Artificial intelligence (AI) is a field of computer science concerned with intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. For example, social media platforms use AI technologies such as natural language processing to understand text data and image processing for facial recognition.

In some instances, regulation tries to create a "legal" definition of AI. For example, a law requiring disclosure of chatbots defines a "bot" as "an automated online account where all or substantially all of the actions or posts of that account are not the result of a person." Article 22 of the GDPR provides a right not to be subject to a decision based solely on "automated processing, including profiling" that has legal or similarly significant effects. Other AI laws address driverless vehicles. These legal definitions determine whether a law applies to a particular AI process or system.

The current legal framework for AI can be grouped as follows:

(1) regulation specific to AI technology (e.g. automated decision-making, facial recognition);

(2) regulation specific to a use case or industry application (e.g. finance, health, human resources);

(3) legal accountability for (unintended) consequences of the use of AI (e.g. criminal, civil); and

(4) voluntary ethics codes.

Regulations specific to AI technology, such as those directed to facial recognition software, are being introduced or proposed. For example, some cities and regions have proposed banning the use of facial recognition technology by police and other municipal agencies, and a major body camera company has voluntarily banned facial recognition software. Blunt regulatory instruments that apply to AI should clearly define the technology and limit its fields of use, to avoid overly broad application that stifles research and development.

There are also regulations specific to a use case or industry application of AI, such as healthcare and finance. For example, the regulatory approach to an AI-based medical decision support tool might change if the assumptions and limitations of the software are made clear, including limitations of the training data, the selection of features and the algorithmic assumptions. As another example, the use of AI as a human resources tool for hiring and promotion is subject to employment and discrimination laws.
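To make this concrete, the sketch below (in Python) shows one way an AI-based medical decision support tool might surface its documented assumptions and limitations alongside each prediction. All names, thresholds and scoring logic here are hypothetical illustrations, not any regulator's required format or a real device's implementation.

```python
# Hypothetical sketch: shipping documented limitations with each prediction,
# so use outside the tool's validated scope is flagged rather than silent.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Illustrative documentation bundle shipped with the model."""
    intended_use: str
    training_population: str            # limitation of the training data
    features_used: list[str]            # selection of features
    algorithmic_assumptions: list[str]  # algorithmic assumptions


@dataclass
class Prediction:
    risk_score: float
    caveats: list[str] = field(default_factory=list)


CARD = ModelCard(
    intended_use="Screening support only; not a diagnostic device.",
    training_population="Adults aged 40-75 from three hospitals.",
    features_used=["age", "blood_pressure", "bmi"],
    algorithmic_assumptions=["Features are independent given the outcome."],
)


def predict_with_caveats(patient: dict) -> Prediction:
    # Placeholder scoring logic; a real tool would call a trained model here.
    score = 0.01 * patient["age"] + 0.002 * patient["blood_pressure"]
    pred = Prediction(risk_score=round(min(score, 1.0), 3))
    # Flag use outside the validated population instead of failing silently.
    if not 40 <= patient["age"] <= 75:
        pred.caveats.append("Patient age is outside the training population.")
    return pred


print(predict_with_caveats({"age": 82, "blood_pressure": 130, "bmi": 27}))
```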

In some cases, the behaviour of AI software can change as a result of machine learning, which can lead to unintended consequences such as privacy violations, criminal liability and reputational risk. An autonomous vehicle can cause property damage, and AI-generated fake images and videos can be used to spoof facial authentication systems and commit theft, for example.
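The following minimal sketch, assuming a scikit-learn online-learning setup, illustrates the kind of silent behavioural shift at issue: the code and pipeline never change, yet the same input can receive a different answer after the model continues learning from new data.

```python
# Sketch of model drift under continued learning (scikit-learn).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

# Initial batch: positive feature values are labelled 1, negative 0.
X1 = rng.normal(0.0, 1.0, size=(200, 1))
y1 = (X1[:, 0] > 0).astype(int)
model.partial_fit(X1, y1, classes=[0, 1])

probe = np.array([[0.3]])
print("before update:", model.predict(probe))  # likely class 1

# Later batch from a shifted population: the decision boundary moves.
X2 = rng.normal(1.0, 1.0, size=(200, 1))
y2 = (X2[:, 0] > 1.0).astype(int)
model.partial_fit(X2, y2)

print("after update:", model.predict(probe))   # may now be class 0
```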

Artificial intelligence poses novel ethical considerations, as these complex systems automate decisions that were traditionally in the human realm. Law attempts to codify policies that are often driven by ethical or moral principles. There is now a very long list of voluntary codes for AI ethics, including ethical frameworks, principles, oaths, tool kits and declarations. Some of these voluntary codes are directed to a global audience, while others are country-specific or directed to a particular use case. For example, the UK has published a Data Ethics Framework, and a US Department of Defense report outlines principles for ethical AI: responsible, equitable, traceable, reliable and governable. These principles are often described in broad terms, which makes them difficult to operationalize internally. Compliance and enforcement are also challenges: a company might make misleading statements such as "only using ethical AI" or "developing AI for good" even though its operations do not comply with the relevant voluntary code.

In some instances, AI can be used to enforce AI ethics. For example, so-called audio "deepfakes" involve computer-generated audio similar to a human voice. Canadian company Dessa built a deepfake detector to help combat misuse. To discern between real and fake audio, the detector uses visual representations of audio clips called spectrograms, which are also used to train speech synthesis models. While real and fake audio may sound virtually identical to the unsuspecting ear, their spectrograms appear different to the detector (see https://medium.com/dessa-news/detecting-audio-deepfakes-f2edfd8e2b35).
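The sketch below illustrates the spectrogram idea in Python using scipy. It is not Dessa's actual model: the heuristic detector and its threshold are placeholder assumptions standing in for a trained classifier.

```python
# Sketch: turn a waveform into a time-frequency "image" (a spectrogram)
# and hand it to a detector, as audio deepfake detectors do.
import numpy as np
from scipy.signal import spectrogram


def to_spectrogram(waveform: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return a log-scaled spectrogram: a 2-D image of the audio."""
    freqs, times, power = spectrogram(waveform, fs=sample_rate, nperseg=512)
    return np.log(power + 1e-10)  # log scale compresses the dynamic range


def looks_synthetic(spec: np.ndarray) -> bool:
    """Placeholder detector: a real system would use a trained model here.

    This toy heuristic checks variance in the upper half of the spectrum;
    the threshold is illustrative only.
    """
    upper = spec[spec.shape[0] // 2:]
    return float(upper.var()) < 1.0


# Demo on one second of synthetic audio (a pure tone plus noise).
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)
audio += 0.05 * np.random.default_rng(0).normal(size=sr)
spec = to_spectrogram(audio, sr)
print(spec.shape, "suspicious:", looks_synthetic(spec))
```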

Given the widespread adoption of AI, new laws will be created that, ideally, maximize its benefits and reduce its harms. Companies developing or deploying AI should diligently track legal developments.


About Norton Rose Fulbright Canada LLP

Norton Rose Fulbright is a global law firm. We provide the world's preeminent corporations and financial institutions with a full business law service. We have 3800 lawyers and other legal staff based in more than 50 cities across Europe, the United States, Canada, Latin America, Asia, Australia, Africa, the Middle East and Central Asia.

Recognized for our industry focus, we are strong across all the key industry sectors: financial institutions; energy; infrastructure, mining and commodities; transport; technology and innovation; and life sciences and healthcare.

Wherever we are, we operate in accordance with our global business principles of quality, unity and integrity. We aim to provide the highest possible standard of legal service in each of our offices and to maintain that level of quality at every point of contact.

For more information about Norton Rose Fulbright, see nortonrosefulbright.com/legal-notices.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.