When ChatGPT was launched last year, it took the tech world by storm thanks to its role as a smart coding assistant and its ability to speed up the writing process for workers in virtually any industry. Since then, generative AI systems for a variety of uses have gained fans all over the world. Emerging technologies companies – even those not involved in artificial intelligence – increasingly rely on AI systems to generate content. This rapid proliferation, along with a few well-publicized failures and abuses of the technology, has led some to call for a pause in its development.

While AI's potential is too vast for limiting use cases and applications to be a viable option, technology companies and other businesses must be aware that the potential for failure exists. Here, we examine some legal issues that can arise when using AI and how system developers and emerging technologies companies could be held liable for harmful content produced by their generative AI systems.

Recent Cases

Recently, two related landmark cases have reignited discussions surrounding the liability of digital and tech entities for user-generated content on their platforms. In May 2023, the United States Supreme Court examined whether social media platforms could be liable under Section 2333 of the Anti-Terrorism Act (18 U.S.C. § 2333), as amended by the Justice Against Sponsors of Terrorism Act, for 'aiding and abetting' a designated foreign terrorist organization through user-recommended content. That case, Twitter, Inc. v. Taamneh, resulted in a unanimous Supreme Court reversal of a Ninth Circuit opinion. The Court held that the plaintiffs' allegations against Twitter and other companies failed to state a claim for aiding and abetting under the statute, and it did not address Section 230 in its ruling. The decision was consistent with the Supreme Court's per curiam ruling in Gonzalez v. Google LLC, in which the Court declined to resolve whether the protections outlined in Section 230 of the Communications Decency Act shield Google from allegations that it aided and abetted international terrorism by allowing ISIS to use its YouTube platform "to recruit members, plan terrorist attacks, issue terrorist threats, instill fear, and intimidate civilian populations." Instead, the Court remanded the case to the lower court for further consideration in light of the Twitter decision.

Legal Risks for Developers & Emerging Technologies Companies

The fundamental question is whether developers of generative AI assume any liability for illegal content generated by their platforms. Potential sources of such liability include defamation, copyright infringement, data breaches, and invasion of privacy, among others.

Libel and Defamation
Developers of generative AI and emerging technologies companies can be held liable for libel and defamation under state law for content they post directly or authorize to be posted via a generative AI system. Under the tort of defamation, which includes both libel (written statements) and slander (spoken statements), a false statement that injures another person's reputation is actionable. While AI software, not being a legal entity, cannot itself be held legally responsible for defamatory statements it produces, companies that create, post, or distribute such content may face legal action if they are found to have acted negligently in doing so. For example, a company could be liable if it publishes statements generated by the program that it knows may be false. Internet platforms hosting third-party-generated information may also be subject to defamation laws. Truth is an absolute defense to a defamation claim; however, proving truth can be difficult and expensive. Companies should therefore take steps to verify the accuracy of AI-generated content before posting or distributing it.

Copyright Infringement
Contributory or secondary copyright infringement occurs when a person or company has knowledge of another's infringement and either materially contributes to it or induces it. This means that even if a generative AI system produces content without any human intervention, companies may still be held liable for contributory or secondary infringement if they knew, or had reason to know, that the generated content infringed someone else's work. Companies can protect themselves from this kind of liability by obtaining permission from the original author before using any material created by a generative AI system that draws on that author's work.

Contributory or secondary copyright infringement is a serious offense with potentially severe consequences. Companies should take care not to engage in activities that could expose them to such liability and should consult legal counsel to ensure they are taking all necessary protective steps.

Breach of Privacy
AI systems often collect personal, demographic, and financial data from a number of sources, which in the wrong hands could be used to steal from, embarrass, or otherwise harm people. Data privacy professionals have raised alarms about the methodologies employed for collecting training data, particularly web crawling. For instance, some civil liberties advocates have objected to the practices of the facial recognition firm Clearview AI, which is known to scrape data from a wide array of social media platforms. The exact mechanisms OpenAI uses to collect data remain opaque. However, ChatGPT's privacy policy states that OpenAI does use information entered by users to train and refine its AI models, and OpenAI contends that users, by interacting with the app, implicitly consent to their data being used by the company. Concerns over such practices help explain why Italy's data regulator in March 2023 barred OpenAI from further use of the personal data of millions of Italian citizens. That regulatory action has since sparked a trend, with regulators in France, Germany, and Ireland following suit. Meanwhile, the stance of US privacy regulators on the use of personal data by generative AI systems remains unsettled. Given these complexities, it is advisable to engage a law firm specializing in AI, emerging technologies, and associated fields to fully understand the issues at play before embarking on any personal data collection effort.

Other Threats and Protections
Developers and emerging technologies companies may be held legally liable in certain jurisdictions if the content produced by their generative AI systems is deemed sexually explicit, blasphemous, racially insensitive, or otherwise immoral or unethical.

However, some laws may shield these entities from liability for content produced by generative AI systems. One is Section 230 of the Communications Decency Act, which offers protection to online intermediaries for content hosted on their platforms. This U.S. legislation governs online intermediary liability through two primary provisions. The first, Section 230(c)(1), shields online services from liability for third-party content hosted on their platforms. The second, Section 230(c)(2), protects them from legal consequences for removing objectionable third-party content. It is important to note that Section 230's protections do not extend to federal criminal law, state or federal sex trafficking law, or intellectual property law. Historically, Section 230 has provided a robust defense for internet platforms against liability for user-generated content. However, as AI technology grows more sophisticated, the distinction between content creators and content hosts is becoming increasingly blurred, sparking debate over whether AI-powered platforms like ChatGPT should bear responsibility for the content they generate.

In the realm of intellectual property law, particularly copyright infringement, developers and emerging technologies companies might find refuge under the Digital Millennium Copyright Act (DMCA), whose safe-harbor provisions offer certain protections to online intermediaries that comply with its notice-and-takedown procedures. However, it is always prudent to seek legal counsel when navigating these complex legal landscapes.

Conclusion

The increasing prominence of generative AI technologies like ChatGPT highlights the urgent need to address legal liability for the content these technologies produce. While the basic rules of legal liability discussed above offer a starting point, it is important for developers, companies, regulators, and the public to come together to chart a responsible course for AI development, one that safeguards the interests of all stakeholders and promotes a sustainable and equitable future. Because there are as yet no laws specifically addressing legal liability for AI-powered tools, it is best to speak to a law firm specializing in emerging technologies to determine a course of conduct that is both legally and ethically compliant.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.