On November 28, Minister of Innovation, Science and Industry François-Philippe Champagne presented the House Standing Committee on Industry, Science and Technology (the "Committee"), which is studying Bill C-27, with the text of the Government's proposed amendments to the Artificial Intelligence and Data Act (AIDA). Bill C-27 passed second reading and was referred to the Committee in April. Minister Champagne provided the Committee with a description of the Government's proposed amendments last month, which we covered in a previous bulletin. Consistent with the Minister's description, the proposed amendments would:

  • Introduce new definitions of "artificial intelligence system" (AI system) and "machine learning model".
  • Set out seven initial classes of high-impact AI systems, along with parameters for the Government to deem further classes of systems high-impact.
  • Establish distinct obligations on different actors across the AI value chain and clarify that AIDA's obligations apply only once systems (or machine learning models) are placed on the market or put into use.
  • Require the establishment of accountability frameworks for those involved in the development and deployment of general-purpose or high-impact AI systems.
  • Provide new powers to the Artificial Intelligence and Data Commissioner.
  • Align AIDA with the European Union's Artificial Intelligence Act (EU AI Act).

The Minister's cover letter frames AIDA as essential legislation requiring urgent consideration. In the Government's view, "the cost of delay to Canadians would be significant" and would lead to AI systems being "unregulated in Canada for many more years", resulting in Canadians having difficulty "trusting that AI systems developed or used in Canada have been appropriately managed to ensure accountability and fairness". The implication is that Canada's existing laws of general application (human rights legislation, privacy laws, etc.) cannot adequately address risks such as bias in the age of AI.

If you have questions about the development of AIDA, or how your organization can prepare for AIDA, please contact the authors or your regular contact at Fasken.

Proposed Changes to AIDA

The proposed amendments to AIDA are substantial and cover the topics that the Government introduced in October (see a comparison document showing the new changes here). The Minister's letter also includes explanations of the objectives of the amendments, which we summarize below along with the wording of the amendments.

New Definitions

The proposed amendments include a new definition of "artificial intelligence system", taken from the Organisation for Economic Co-operation and Development (OECD) definition of such systems. This definition is foundational to AIDA, as it sets the boundaries of the systems to which AIDA will apply:

artificial intelligence system means a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.

Relative to the initial definition, the OECD's definition aligns more closely with the current understanding and discourse on AI systems. The Government notes that the OECD definition includes the concept of "inference" as a distinguishing feature of AI systems, setting them apart from other computational systems. This focus on inference also eliminates the need to reference specific data processing techniques, such as machine learning.

Indeed, the initial definition of "artificial intelligence systems" in AIDA was broad, partly due to the need for a catch-all reference to data processing techniques. That definition encompassed all technological systems that "autonomously or partially autonomously" process data "through the use of a genetic algorithm, a neural network, machine learning or another technique". This broad definition could have inadvertently included systems not typically recognized as AI.

The proposed amendments would also introduce a definition of "machine learning model":

machine learning model means a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns.

This definition extends the application of AIDA to include obligations related to machine learning models in addition to AI systems. Notably, while the new definition of AI systems mentions the use of a "model" in general terms, it does not reference the defined term "machine learning model". It is unclear whether the reference to a model in the definition of AI systems is meant to be broader than the defined term machine learning model, and whether some models used in AI systems might therefore remain unregulated, unlike machine learning models, which are subject to obligations under the Act.

Initial List of High-Impact AI Systems

Many of AIDA's requirements apply exclusively to high-impact systems. In the initial text of AIDA, the definition of high-impact systems was left entirely to regulations, which led many stakeholders and parliamentarians to ask the Government for more clarity as to how it would define such systems.

Similar to the EU AI Act, the proposed amendments set out seven initial classes of AI systems that are deemed high-impact systems, along with a set of factors that the Government will consider in establishing new classes of high-impact systems. The initial classes are:

  1. Employment-related decisions: This class of high-impact systems focuses on AI systems used to make determinations in employment contexts, including recruitment, hiring, promotion, and termination. The Government's concern is that AI can perpetuate existing biases, affecting decisions like "who to advertise jobs to, how to rank applicants and who gets access to opportunities within an organization".
  2. Provision of services: This class includes AI systems used to decide whether to provide services to individuals, the type and cost of services, or the prioritization of services. The Government argues that AI systems used in these cases might exacerbate historical biases.
  3. Biometric information processing: This class captures AI systems that process biometric data to identify individuals or to assess their behaviour or mental state. The Government's view is that these AI systems enable the processing of biometric information at scale, allowing individual or group behaviour to be predicted and inferred. The Government argues that this carries a significant risk of inadvertent bias and psychological harm.
  4. Content moderation and prioritization on communications platforms (e.g., social media and search engines): This class targets AI systems used by online communications platforms, such as social media services, for content moderation and prioritization. The Government's concern for this class is the impact of these AI systems on freedom of expression, for example because the systems may discriminate between differing dialects or fail to understand the context of speech.
  5. Healthcare and emergency services: This class is concerned with AI applications in healthcare and emergency services, excluding certain medical devices regulated by the Food and Drugs Act. The focus is on ensuring that these systems do not discriminate and that they adequately address health and safety concerns.
  6. Court or administrative body decision-making: This class involves AI systems used by courts or administrative bodies for decision-making with respect to individuals in legal or administrative proceedings. The goal is to prevent biases in these systems, which could have severe consequences for individuals' rights and access to justice.
  7. Law enforcement: This class covers AI systems used by peace officers for law enforcement purposes. Given the significant impact of policing on individuals and communities, the Government highlights that there is an interest in ensuring these systems are non-discriminatory, safe, and effective.

The Government notes that while these classes "are covered to some degree by other regulatory frameworks, such as the Personal Information Protection and Electronic Documents Act (PIPEDA) and the [Canadian Human Rights Act], care has been taken to ensure that AIDA will not duplicate requirements".

When adding new classes of high-impact systems, the Government (the Governor in Council) must consider the severity and extent of potential adverse impacts, including impacts on human rights and potential social harms. The Government must also consider whether a proposed class of high-impact systems is already adequately regulated by other laws. Given that the Government has acknowledged overlap between these initial classes and existing regulatory regimes, it remains to be seen in which cases this final factor will operate to limit prospective new classes of high-impact systems.

Distinct Obligations Across the AI Value Chain

In response to concerns that certain regulated parties might be held accountable for activities occurring outside their operational scope or capabilities, the proposed amendments establish more clearly delineated responsibilities across the AI value chain. The amendments distinguish between developers of machine learning models intended for high-impact use, developers of high-impact systems, persons who make high-impact systems available for use, and those who manage the operation of high-impact systems. The objective is to create specific requirements tailored to the particular challenges facing each of these stakeholders. The requirements variously include identifying, assessing, and mitigating risks of harm and biased output (including related efficacy testing obligations), with specific obligations tailored to the various stages of AI system development and implementation. For example, system operators will be required to implement a means of receiving feedback on the system's performance and to comply with incident response and reporting requirements.

General-Purpose AI Systems

The proposed amendments introduce a definition of "general-purpose system", which is distinct from the high-impact systems discussed above (though the amendments note that an AI system can be both a general-purpose system and a high-impact system). A general-purpose system is defined as an AI system that is designed for use, or to be adapted for use, in many fields and for many purposes, including fields, purposes, and activities not contemplated during the system's development. This is a broad definition, intended to capture systems with a wide range of prospective uses, some of which are not yet fully understood. The proposed amendments also impose several new requirements on developers before a general-purpose system can be brought to market, including assessing potential adverse consequences, taking measures to assess and mitigate risk (and testing the efficacy of those mitigation measures), enabling human oversight, reporting serious incidents, and keeping records.

The Minister's cover letter explains that the concern with general-purpose generative AI systems is their ability to produce synthetic media that goes undetected, raising serious concerns about the proliferation of "deep-fakes". Accordingly, the proposed amendments would require organizations to make best efforts to ensure that the output of generative AI can be detected by users, and to include a plain language description of the general-purpose system to maintain transparency.

Accountability Frameworks

The proposed amendments will explicitly require accountability frameworks, which are meant to ensure that organizations involved in the development and deployment of both general-purpose and high-impact AI systems are accountable for their risk management practices. These frameworks must, in accordance with regulations, include:

  • a description of the roles and responsibilities and reporting structure for all personnel who contribute to making the AI system available or who contribute to the management of its operations;
  • policies and procedures respecting the management of risks relating to the AI system;
  • policies and procedures respecting the data used by the AI system;
  • a description of the training that the personnel referred to above must receive in relation to the AI system and the training materials they are to be provided with;
  • if the person establishing and maintaining the framework manages the operations of the AI system, policies and procedures on how the personnel referred to above are to advise the person of any use of the AI system that results, directly or indirectly, in serious harm or of any mitigation measures that are not effective in mitigating risks of serious harm; and
  • anything that is prescribed by regulation.

New Powers for the Artificial Intelligence and Data Commissioner

The initial draft of AIDA vested significant power for the administration and oversight of AIDA in the Minister and in a senior official within Innovation, Science and Economic Development Canada (ISED) who is designated as the Artificial Intelligence and Data Commissioner. Stakeholders expressed concern that this would create a conflict of interest between enforcement activities and ISED's economic development priorities, particularly since the Commissioner's role was not well defined.

The proposed amendments shift certain powers that were previously attributed to the Minister to the Commissioner. The Commissioner's functions and responsibilities are now entrenched in the statute and will include the power to:

  • compel an organization to produce its accountability framework, allowing the Commissioner to inspect how organizations are complying with AIDA and to provide guidance or recommendations on corrective measures;
  • compel an organization to make available the assessments it has conducted in accordance with AIDA, so that the Commissioner can confirm whether they agree with the assessment;
  • conduct an audit, if the Commissioner has reasonable grounds to believe that an organization has contravened or is likely to contravene AIDA (which includes the authority to enter premises, access systems, copy data, and conduct testing of AI systems); and
  • disclose information to other regulators and to receive information from other regulators, including the Privacy Commissioner, the Human Rights Commission, the Commissioner of Competition, the Office of the Superintendent of Financial Institutions, the Financial Consumer Agency of Canada, the Financial Transactions and Reports Analysis Centre of Canada, the Minister of Health, and the Minister of Transport.

Together, these amendments are intended to enable the Commissioner to carry out key statutory functions independently of the Minister and to position the Commissioner as a "central hub" for AI regulation. Ultimately, however, only the Minister would retain the power to issue orders under AIDA.

Alignment with the EU Artificial Intelligence Act (EU AI Act)

Since AIDA was first tabled, international consensus on a shared understanding of AI's potential and its challenges has grown. In response to concerns raised by business stakeholders, the proposed amendments seek to alleviate the risks posed by misalignment of key definitions and standards across jurisdictions.

The proposed amendments encompass concepts on which greater agreement has now been reached in other jurisdictions and in multinational institutions:

  • A new definition of "artificial intelligence system" to align with the OECD definition (see discussion above).
  • Clarification to the scope of AIDA, such that obligations would apply only once AI systems are placed on the market or put into use in the course of international or interprovincial trade. In alignment with the EU AI Act, compliance would not need to be demonstrated during research and development phases.
  • Recognition, similar to the EU AI Act, that AI systems can be modified by organizations that did not create the original system and that the modifying party should also be subject to compliance obligations. The proposed amendments would extend obligations to general-purpose and high-impact AI systems that have been substantially modified (that is, where the change is so substantial that it alters how the system would meet its obligations under AIDA).
  • More robust accountability frameworks, as discussed above. Clarification of roles and responsibilities within organizations brings AIDA into greater alignment with the EU AI Act, which requires a comparable quality management system.

These amendments will bring AIDA into closer alignment with the EU AI Act and will promote interoperability between these legal frameworks.