On April 21, 2021, the European Commission published the landmark Proposal for a Regulation on a European Approach for Artificial Intelligence (the Proposal), which sets out the "first ever" legal framework for addressing the risks associated with artificial intelligence.

This bulletin summarizes the European Commission's Proposal and compares it to Canada's current regulatory landscape, including the federal and provincial governments' recent efforts to create more responsive laws governing AI.

What you need to know

  • The Proposal defines AI systems to include software that is developed with one or more of the techniques listed in the Proposal and that can, for a given set of human-defined objectives, generate content, predictions, recommendations or decisions.
  • The framework proposes a tiered risk-based approach aimed at balancing regulation with encouragement of trustworthy innovation. The Proposal's key obligations are grounded in data governance and management, accuracy, transparency, robustness and cybersecurity.
  • The Proposal has wide material and territorial scope. It aims to regulate providers, distributors, importers, manufacturers and users that place AI systems on the market or in their products and services. Like GDPR, the Proposal is intended to have extra-territorial application.
  • Companies that violate the proposed regulations could face fines of up to €30 million or 6% of their total worldwide annual turnover, whichever is higher.
  • Businesses in Canada and the U.S. should focus on proactive data governance and transparency around AI systems to anticipate international and domestic regulatory developments.

European Commission's AI Regulation Proposal

The Proposal, which aims to position the EU as a market leader in trustworthy AI, is the newest addition to the EU's broader data and digital initiative that includes the Data Act (forthcoming), the Data Governance Act, the Digital Markets Act, and the Digital Services Act, all of which are currently making their way through the EU's legislative process.

Risk-based approach to AI regulation

The Proposal's risk-based framework allows for a more nuanced and proportionate regime than blanket regulation of all AI systems. Under this framework, the types of risks and threats are regulated based on sector and specific use cases.

  1. "Unacceptable risk" AI systems:: AI technologies that pose a clear threat to people's security and fundamental rights are deemed to pose an unacceptable risk to society and would be prohibited. Unacceptable risks include AI systems that deploy subliminal techniques to materially distort a person's behaviour in a way that may cause "physical or psychological" harm and systems that exploit the vulnerabilities of specific groups. The Proposal also prohibits "real time" remote biometric identification systems in publicly accessible spaces (i.e. live facial recognition) for the purpose of law enforcement, except in limited circumstances.
  2. "High-risk" AI systems: systems in this category must comply with a strict set of mandatory requirements before they can be placed on the EU market. Companies must provide regulators with proof of the AI system's safety, including risk assessments and documentation explaining how the technology is making decisions as part of a formal registration process. Organizations must also establish appropriate data governance and management practices, and ensure the traceability, transparency and accuracy of their datasets and technology. Organizations must also inform end-users of the characteristics, capabilities and limitations of performance of the high-risk AI systems and guarantee human oversight in how the systems are created and used. After a high-risk AI technology is sold or put into use, AI system providers are also required to establish "proportionate" post-marketing systems to ensure "continuous compliance" with the regulation. Overall, the obligations vary depending on whether the business is a provider, product manufacturer, importer, distributor, user or other third party.

    Given these strict regulatory requirements, the Proposal specifies that the "high-risk" designation should be limited to systems that have a significant harmful impact on the health, safety and fundamental rights of persons in the European Union. Annex III of the Proposal lays out eight categories of high-risk AI systems: biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and the administration of justice and democratic processes.
  3. Low-risk systems: AI systems designated as low risk are not subject to the same regulatory obligations because they do not pose the same threat to health and safety, EU values or human rights. However, transparency obligations would apply to systems that interact with humans (e.g., chatbots), are used to detect emotions or determine association with social categories based on biometric data (e.g., employee monitoring technologies that use emotion-recognition capabilities to differentiate between "good" and "bad" employees), or generate or manipulate content (e.g., "deep fakes").

Other than the end-user transparency requirements, the Proposal does not address privacy requirements associated with the processing of end-user personal data by AI systems once they are in operation. Presumably, this was done to ensure GDPR remains the central regulation governing personal data protection, including for the purposes of AI systems.

Enforcement

The Proposal's enforcement regime mirrors GDPR's enforcement framework. A European Artificial Intelligence Board, chaired by the European Commission, would supervise and coordinate enforcement of the regime, while national supervisory authorities would oversee the implementation of the regulation in member states. Accordingly, member states would not be required to create new AI regulatory bodies.

Next steps

Given that the European Commission's framework is only at the proposal stage, it is unlikely that the proposed regulations will apply before 2024. The Proposal must now go to the European Parliament and the Council for further consideration and debate and, given the scope and novelty of the legislation and the significant number of stakeholders involved, it could face significant amendments. Once adopted, the regulation would apply 24 months after it comes into force, although some provisions could apply sooner.

Legal framework governing AI in Canada

Canada does not have a regulatory regime that deals expressly with AI. Instead, AI systems in Canada are regulated by general privacy, technology and human rights legislation. Although Canada is not yet at the stage of developing a comprehensive AI regulatory regime, there is movement at both the federal and provincial levels to develop more responsive frameworks for regulating AI.

Ontario's Trustworthy Artificial Intelligence framework

Ontario's provincial government is in the early stages of developing a "Trustworthy Artificial Intelligence" framework to "support AI use that is accountable, safe, and rights based". The first step in the process is creating guidelines for the government's own use of AI. On May 5, 2021, the provincial government launched a consultation to solicit feedback on potential actions under the guiding principles of transparent, trustworthy and fair AI. Already, it is possible to draw parallels (and contrasts) between the Ontario government's proposed commitments regarding its use of AI and the European Commission's proposed regulatory framework.

  • Rights-based approach: The Ontario government's third commitment ("AI that serves all Ontarians") draws upon familiar rights-based language. In this case, the Ontario government has proposed to use "AI technologies that are rooted in individual rights".
  • Risk-based approach: The Ontario government's second commitment ("AI use Ontarians can trust") proposes putting rules and tools "in place to safely and securely apply algorithms to government programs and services based on risk". Integrating the concept of risk into the government's framework may help align the Canadian approach with Europe's and invite more nuanced and proportionate regulation of AI systems.
  • Transparency: The Ontario government's first proposed commitment ("No AI in Secret") diverges most significantly from the European Commission's approach. This commitment proposes that "the use of AI by the government will always be transparent, with people knowing when, why, and how algorithms are used and what their rights are if harm occurs". By contrast, the European Commission adopted a more flexible approach to transparency. While the Proposal subjects some AI systems to strict transparency requirements (e.g., those that interact with humans or create "deep fakes"), low-risk AI systems are not subject to the same requirements.

Office of the Privacy Commissioner of Canada's recommendations for PIPEDA reform

In November 2020, the Office of the Privacy Commissioner of Canada (OPC) also took steps to address the use of AI systems in Canada. The OPC released recommendations for PIPEDA reform, informed by its consultation on AI, that would allow for a more responsive regulatory framework governing AI. The OPC made a number of proposals that would "help to reap the benefits of AI while upholding individuals' fundamental right to privacy". Several of those recommendations align with the EU's Proposal. For instance, the OPC's recommendation that PIPEDA require "organizations to design AI systems from their conception in a way that protects privacy and human rights" aligns with the Proposal's tiered risk-based approach and its requirement that AI providers develop robust data governance and risk management systems for the entire life cycle of the AI system.

Canadian privacy reform and AI

The federal government's long-awaited privacy reform bill, Bill C-11, and Québec's Bill 64 both address AI. Both bills introduce the concept of "algorithmic transparency", and Bill 64 also provides individuals with rights in relation to automated decision making and profiling. For more, see our bulletins on Bill C-11 and Bill 64.

Takeaways for Canadian businesses

While the EU Proposal is notable because it seeks to regulate an entire area of technology, rather than taking a piecemeal approach through human rights, privacy and other areas of existing law, its key themes are likely to be reflected in Canadian privacy law reform going forward. Our key takeaways are:

  • Organizations should prioritize data governance. Businesses that develop or use AI technology should proactively develop robust data governance frameworks. This will enable organizations to compete internationally, nimbly address future AI regulation in Canada, the U.S. and elsewhere, and mitigate current privacy, cybersecurity and data confidentiality risks.
  • Nuanced AI regulation is the future. Just as the European Commission rejected a blanket approach to regulating AI, current Canadian initiatives are exploring risk-based models of regulation.
  • Expect continued emphasis on transparency. Transparency will remain an important pillar of AI regulation in Canada and internationally. Organizations must be able to explain in plain language what their AI systems do and how they affect individuals' interests and rights.
