Artificial intelligence (AI) creates many opportunities, but also some special legal challenges, when used in businesses. Improper handling can both create unwanted risks and limit opportunities. How will the EU's upcoming rules on AI affect your business, and what other legal rules should you be aware of when using AI?

The growing integration of artificial intelligence (AI) into common business applications, combined with the possibilities offered by generative AI, has accelerated the spread of AI into 'ordinary' enterprises.

There are many opportunities but few absolute legal restrictions on the use of AI in ordinary businesses. For many enterprises, the risk of adopting AI may be perceived as lower than the risk of not doing so. However, one should be aware of the challenges AI usage may entail and seek to handle these in an appropriate manner.

The EU is in the process of adopting its own regulations for AI, imposing certain obligations on providers and users of AI based on the system's risk classification and the role they play. Additionally, privacy rules pose specific requirements for AI usage, and employing AI may also entail other types of obligations and responsibilities.

AI REGULATION

What is the AI Regulation?

The upcoming 'Artificial Intelligence Act,' also known as the 'AI Act' (hereinafter 'AI Regulation'), is the EU's attempt to provide a specific legal framework to regulate certain aspects of AI.

The regulation aims to regulate the use of AI systems in a sector-agnostic and as technology-neutral a manner as possible. The goal is to ensure responsible development and use of AI systems in line with fundamental rights and to facilitate the free flow of AI-based goods and services within the EU. However, it primarily constitutes public law regulation and has limited influence on more private law matters related to AI.

Currently, only various drafts of the text exist, but the EU aims to adopt the regulation by the end of 2023. It will come into force in the EU two years after adoption, and Norway will presumably aspire to implement the regulation on a similar timeline as the EU.

Definition of AI

There is no universally accepted definition of AI, but the latest proposal from the European Parliament defines an AI system as:

'a machine-based system that is designed to function with varying levels of autonomy and can, for explicit or implicit purposes, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.'

The definition aims to encompass various types of AI systems and technologies, both current and future, and is also presumed to be harmonized with internationally recognized AI definitions.

Risk-Based Approach

The AI Regulation takes a risk-based approach, operating with different risk categories where higher risk entails stricter regulation. AI systems that conflict with fundamental values are deemed to pose an unacceptable risk and will generally be prohibited. High-risk systems will be subject to conformity assessments and registration, among other requirements. Most AI systems for use in ordinary businesses are expected to be classified as low risk under the AI Regulation, entailing limited obligations. However, it's worth noting that the AI Regulation primarily measures risk against fundamental rights (human rights) rather than commercial, technical, or legal risks.

Who is Affected by the AI Regulation?

The AI Regulation primarily imposes obligations on providers of AI systems, focusing on AI systems that pose high risks to fundamental rights. Providers of such AI systems will, among other things, be required to conduct conformity assessments and register their systems. There are also specific requirements for foundation models and general-purpose AI systems, including generative AI.

Businesses that merely use AI systems internally, without themselves being providers, are subject to limited direct obligations under the AI Regulation, and these obligations mainly attach to high-risk AI systems. Using high-risk AI will, among other things, require risk assessments and human oversight, which in turn presupposes adequate resources and expertise.

Significance

The focus of the AI Regulation on providers and high-risk systems is likely to have less direct impact on ordinary businesses than, for instance, the introduction of the GDPR.

However, it's crucial to be aware of the risk classification of the AI systems one offers or uses under the regulation. Enterprises within critical societal areas such as transportation, health, energy, finance, telecommunications, etc., must also be aware that the use of AI in their domains often classifies as high risk.

A governance framework with supervisory authorities and market monitoring is planned both at the European and national levels, along with certification mechanisms, etc. Violations of the AI Regulation can result in fines of up to 7% of the annual turnover worldwide or EUR 40 million, whichever is higher. Experience with the GDPR suggests that the sanction regime will likely be fully employed against enterprises that do not comply with the rules.
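The fine ceiling described above is simply the greater of the two amounts. A minimal illustration of the calculation (the turnover figures are hypothetical, and the thresholds are those of the draft regulation as described above):

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper limit of fines under the draft AI Regulation:
    the higher of 7% of worldwide annual turnover or EUR 40 million."""
    # Compute 7% via integer-friendly arithmetic to avoid rounding noise.
    seven_percent = 7 * annual_worldwide_turnover_eur / 100
    return max(seven_percent, 40_000_000)

# Hypothetical examples:
# - EUR 1 billion turnover: 7% is EUR 70 million, above the EUR 40 million floor.
# - EUR 100 million turnover: 7% is only EUR 7 million, so EUR 40 million applies.
large_enterprise = max_fine_eur(1_000_000_000)
small_enterprise = max_fine_eur(100_000_000)
```

As the second example shows, the EUR 40 million alternative means even businesses with modest turnover face a substantial maximum exposure.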

OTHER REGULATIONS

General

The AI Regulation does not limit obligations arising from other regulations and does not regulate private law matters or matters subject to other specific legislation. However, compliance with the AI Regulation may indirectly affect assessments under other relevant regulations, depending on the circumstances.

It's also noteworthy that the risk classification on which the AI Regulation is based does not necessarily correspond to the commercial and legal risks that the use of AI may entail for a business under other regulations. An AI system with a low-risk classification under the AI Regulation may still pose high risks or demand significant compliance under other legal rules.

Special Legislation

Special legislation contains several rules that may impact the use of AI. Apart from sector-specific regulations that may apply to certain types of enterprises, especially within critical sectors, sector-agnostic regulations on privacy and digital security are particularly relevant to AI systems. For instance, using AI systems to process personal data will in many cases require a Data Protection Impact Assessment (DPIA), and AI systems also raise digital security issues, both as a potential attack surface and as part of a business's defences.

Liability

The use of AI can also lead to liability under general principles of tort law. Someone must be held accountable for damage caused by AI, but how risks and liabilities in this context should be allocated is not clear.

In cases where there exists a contractual relationship between the potential wrongdoer and the potential victim, liability will often be regulated in the contract.

Outside contracts, there might be a form of product liability for those manufacturing or offering an AI system. The EU, for instance, has proposed expanding product liability rules to cover AI systems. However, it's also conceivable that a business using AI could be held liable under general principles of tort law based on fault liability or statutory strict liability.

The EU has also proposed an AI Liability Directive that would make it easier for victims to make claims for damages related to the use of AI systems.

Intellectual Property

The use of AI raises several intellectual property challenges, concerning the use of training data for machine learning, information security, and protection of the system's output. For instance, using data for machine learning purposes may infringe third parties' intellectual property rights, privacy, or information security. At present, the starting point in intellectual property law is that output created by an AI system enjoys limited (if any) protection, even though similar output created by a human would be protected.

Enterprises utilizing data for machine learning purposes or employing AI in the production of text, images, video, music, or services should be aware of this and seek to regulate the associated risks through contractual provisions and other measures.

RECOMMENDATIONS FOR AI USAGE

In general, there are many opportunities and few absolute legal limitations concerning the use of AI in ordinary enterprises. Nevertheless, from a corporate governance perspective, it is essential to be aware of the risks that AI may entail, not to take unwanted or uninformed risks associated with AI, and to take action when necessary or appropriate. The legal room for manoeuvre, and the associated risk, may also depend directly on the assessments made and the solutions chosen.

Businesses should map their current use of AI, as well as future plans, and develop a strategy and guidelines for AI usage.

Limiting AI use or creating unnecessarily extensive governance and management systems for AI is not a goal in itself. However, conducting a risk analysis of the obligations, risks, and other consequences that AI usage entails for the business, both from a regulatory and commercial standpoint, is advisable.

Furthermore, it's important to examine the technical, organizational, and contractual measures that the company can and should take to handle the risks associated with AI use. There may be synergies in aligning AI approaches with the company's measures related to privacy and information security. Guidelines could also encompass employees' use of freely available AI services such as ChatGPT.

The risks and characteristics of AI systems should also be considered in the company's strategies for AI use and in the contracts it enters into. Traditional contract formats and mechanisms may not necessarily be suitable for regulating AI systems. As a customer, considering requirements for the supplier that allow compliance with the obligations imposed by AI use is advisable.

The use of AI can raise ethical issues, and the AI Regulation encourages ethical guidelines even for AI systems that pose minimal risk.

Ordinarily, the day-to-day management of AI should not fall to the company's board. However, boards in companies where AI use is relevant should have this on their agenda concerning strategy, competitiveness, and risk management.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.