ARTICLE
17 October 2023

Key Legal And Operational Risks For Enterprise AI

By Foley & Lardner LLP

The ability of machines to learn and improve without explicit instructions has the potential to revolutionize many industries, but businesses that use AI must be aware of the legal and operational risks that come with it.

The adoption of artificial intelligence tools in the enterprise is set to accelerate as venture capitalists and large corporations alike deploy billions of dollars toward creating and releasing new foundation models. Following ChatGPT's rapid uptake, arguably the fastest adoption of any new technology application to date, there is no turning back. Fasten your seatbelts and prepare for the disruption. The ability of machines to learn and improve without explicit instructions has the potential to revolutionize many industries, from health care and finance to the courtroom and, ultimately, the judge's chambers. As we all know, however, with great power comes great responsibility, and businesses that use AI must be aware of the legal and operational risks that come with it.

Potential for AI in the Enterprise

As evidenced by the billions of dollars in new capital being deployed to OpenAI, Anthropic, Stability AI, and other startups, the opportunity for new businesses to take large data pools and monetize them through new foundation models that provide premium services will disrupt consumer and enterprise applications forever. Creators of intellectual property have the opportunity to unlock untapped revenue streams. Developers and enterprises will pay for these premium services, potentially on a volumetric (usage-based) basis. At last, enterprises will be able to consume advanced analytics at scale.

Public markets reward the enterprises out in front (e.g., Nvidia) and punish those that look likely to be displaced by ChatGPT and its progeny (e.g., BuzzFeed).

A recent study published by MIT suggests that the deployment of AI in the enterprise is accelerating rapidly across functions and departments, landing and expanding at ever greater velocity. A recent PitchBook Data report estimated the value of the AI and machine learning market at US$197.5 billion in end-user spending in 2022 and forecast that spending will double by 2025.

Amid the hype cycle, in a somewhat stunning development, an unexpected group of bedfellows from academia, the entrepreneur community, "big tech," and old-economy corporate boardrooms joined the Future of Life Institute in publishing an open letter and policy recommendations demanding that governments around the world mandate a moratorium requiring all AI labs to pause, for at least six months, the training of AI systems more powerful than GPT-4. The moratorium was demanded to protect against the profound risks to society and humanity posed by the lack of planning for, or management of, machines that no one, not even their creators, can understand, predict, or reliably control. The "pause" is demanded so that AI labs and independent experts can jointly develop and implement shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts, to ensure safety beyond a reasonable doubt. Simply put, the open letter demands that humanity design AI governance that keeps humans in control of machines rather than ceding control to them.

While the open letter has yet to lead to governmental action, entrepreneurs, executives, and investors would do well to resolve ownership questions regarding the underlying intellectual property and to mitigate the obvious legal and operational risks of AI development and deployment as they design the road map forward.

Fundamental Questions of IP Ownership

Artificial intelligence engines ingest terabytes of data in the form of text, video, audio, and images (the "inputs"), run large language models and other algorithms on this data, and then generate responses to queries (the "outputs"). Because the inputs inform the outputs, a debate is raging over who owns the intellectual property created by AI engines. Who is the creator? Is it the author of the original content the AI engine used to train itself, or the designer of the engine that created the outputs? Another question before the courts is whether the outputs can benefit from copyright protection at all, given that there is no human creator. Does the AI training process infringe copyrights in other works? Do AI outputs infringe copyrights in other works? Is scraping copyrighted data and using it to train AI engines that create outputs a "fair use" protected under U.S. copyright law? Lawsuits from authors and artists demanding compensation are being filed across the United States, and the consequences of losing could be significant.

Legal Risks of AI

One of AI's most significant legal risks is the potential for bias. AI systems are only as good as the data they are trained on; if that data is biased, the AI system will be biased as well. This can lead to outcomes that violate anti-discrimination laws. For example, an AI hiring system trained on historical data that reflects biased hiring practices may perpetuate that bias and result in discrimination against certain groups.
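To make the point concrete, the following minimal sketch shows one way a team auditing an AI hiring tool might compare selection rates across groups. The sample records, the group labels, and the four-fifths (0.8) review threshold are illustrative assumptions made for this example, not a statement of any legal standard.

# Minimal sketch: compare selection rates across groups in a hiring model's decisions.
# The records, group labels, and 0.8 ("four-fifths") threshold are illustrative
# assumptions for this example, not a definitive compliance test.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, where selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = selection_rates(sample)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")

Running this on the illustrative sample flags the group whose selection rate falls well below the top group's, which is the kind of disparity a fuller legal and statistical review would then examine.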

Another legal risk of AI is the potential for violating privacy laws. AI systems often require access to large amounts of data. If that data includes personal information, businesses must comply with relevant privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union or the California Consumer Privacy Act (CCPA) in the United States.
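As a practical illustration of the data-handling point, the sketch below shows one way an engineering team might strip obvious personal identifiers from text before sending it to a third-party AI service. The redact_pii helper and the regex patterns are assumptions made for this example, and this kind of redaction is not a substitute for a full GDPR or CCPA compliance review.

# Minimal sketch: redact obvious personal identifiers before sending text to an
# external AI service. The patterns and the redact_pii helper are illustrative
# assumptions; real compliance work requires a broader data-governance program.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the complaint filed by jane.doe@example.com, SSN 123-45-6789."
    print(redact_pii(prompt))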

As both legislation and regulation lag behind advancements in AI, it is also difficult for companies to prepare for the regulatory measures that will no doubt be coming.

Published April 2023 by Legaltech News

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
