Innovation, Science and Economic Development Canada ("ISED") recently published a Companion Document for the Government of Canada's proposed Artificial Intelligence and Data Act ("AIDA"). AIDA is currently working its way through Parliament as part of Bill C-27, which is undergoing second reading in the House of Commons (click here for our summary of Bill C-27).

For organizations that are or may be participating in the artificial intelligence ("AI") industry or otherwise engaging with AI systems, the Companion Document provides further insight into the government's regulatory intentions and outlines a consultation process to allow for stakeholder input in developing regulations under AIDA.

The Rationale for AIDA

The Companion Document sets out to reassure AI stakeholders about the government's regulatory intentions in two main ways. First, it states that the government recognizes public concerns about the risks and potential harms of AI to society, and therefore aims to reassure the public that it has a plan to address the impact of AI. Second, it acknowledges that AI industry participants are concerned about regulatory uncertainty and the potential impact of developing laws on the advancement of AI, and thus aims to reassure those participants that the government intends to make the AIDA framework agile and adaptable to evolving technology.

Consultation Process & Timelines

Much of AIDA's impact, including its scope and key requirements, would be elaborated in its regulations.

Following Royal Assent of Bill C-27, the government plans to undertake a two-year consultation and development process for AIDA's regulations. The expected timeline for the development of initial regulations is as follows:

  • consultation on regulations (6 months)
  • development of draft regulations (12 months)
  • consultation on draft regulations (3 months)
  • coming into force of initial set of regulations (3 months)

During the consultation phases, the government intends to collaborate with and solicit feedback from industry, academia, civil society, and Canadian communities. Stakeholders will have an opportunity to provide input on areas such as:

  • the types of systems that should be considered and regulated as high impact;
  • the types of standards and certifications that should apply to high-impact AI systems;
  • enforcement mechanisms, such as an administrative monetary penalty scheme;
  • the role of the AI and Data Commissioner; and
  • the establishment of an advisory committee.

AIDA also contemplates future regulations that may elaborate on the requirements under AIDA, such as those relating to the anonymization of data for use in AI systems and reporting requirements in the event that an AI system causes "material harm." As part of the regulatory development process, interested groups and individuals will be able to provide comments on these topics.

How AIDA Would Work: Focusing on High-Impact Systems

The Companion Document summarizes how AIDA would regulate the design, development, and use of AI systems, with a focus on mitigating the risks of harm and bias in the use of "high impact" AI systems. When determining which AI systems are high impact, the government considers the following to be among the key factors to examine:

  • evidence of risk of harm to health and safety, or risk of adverse impact on human rights;
  • severity of potential harms;
  • scale of use;
  • ability to opt out from the system; and
  • degree to which risks are regulated under other laws.

The Companion Document lists examples of the types of technologies that are of interest to the government in terms of their potential impact, including: screening systems that impact access to services or employment, biometric systems used for identification and inference, applications like AI-powered online content recommendation systems, and applications that make critical decisions based on information collected from sensors, such as autonomous vehicles or health sector technology used for triaging.

However, open source software and open access AI systems may not fall within the scope of AIDA. For example, if a researcher publishes as open source a model or tool that could help others develop AI systems, that software does not constitute a complete AI system. In such a case, the distribution of the open source software is not considered "making available for use" under AIDA.

The Companion Document reiterates that obligations related to high-impact AI systems will be set out in the regulations, but does provide some clarity regarding the broad principles that will govern these regulations. The principles align with international standards regarding the governance of AI systems,1 and include meaningful human oversight and monitoring, transparency and accountability, fairness and equity, safety, and validity and robustness.

The Companion Document also provides a variety of examples to clarify the scope of the types of activities that will be regulated by AIDA. Notably, "regulated activities" will not include research or the development of methodologies.

Oversight and Enforcement

The Companion Document provides details about the nature and progression of enforcement under AIDA. The government's immediate enforcement focus would be on establishing guidelines and education to help businesses come into compliance during the "initial years after it comes into force." Industry would have time to adjust to the new framework before facing the prospect of penalties and sanctions.

The Minister of Innovation, Science and Industry would be responsible for enforcement of all parts of AIDA that do not involve prosecutable regulatory or criminal offences, and would be supported by the new AI and Data Commissioner in carrying out these responsibilities. The Minister would have significant oversight powers, including the power to order the cessation of use of a system where there is a risk of imminent harm.

The Companion Document describes how the three different enforcement mechanisms under AIDA would be used:

  1. Administrative monetary penalties ("AMPs") could be applied in the case of clear violations where other attempts to encourage compliance have failed. The violations giving rise to AMPs, and their quantum, will not be known until set by regulations.
  2. More serious cases of non-compliance with obligations under AIDA could be prosecuted as regulatory offences and give rise to significant monetary fines.
  3. True criminal offences would be used to punish AI-related activities that are not otherwise prosecutable under the Criminal Code and are committed by someone who is aware of the harm they are causing, or are at risk of causing.

Takeaways and Next Steps

At the time of writing, Bill C-27 is in second reading but has not yet been referred to committee, and is likely to be subject to amendments as it makes its way through Parliament.

Even if AIDA becomes law, the Companion Document notes that it will be at least 2025 before regulations are developed and the law comes into force. This timeline is commensurate with the amount of work required to develop regulations and canvass stakeholder input in a rapidly evolving space. Given the prevalence of AI systems in many aspects of the Canadian economy, and that much of the core of AIDA as proposed has yet to be written, the consultation and regulation-making process set out in the Companion Document could have an important influence on innovation in Canada and its competitiveness in AI technologies.

Footnote

1. See our previous bulletin about the NIST AI Risk Management Framework.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.