21 December 2023

ISED Minister Publishes Letter On Amendments To The Artificial Intelligence And Data Act




In recent months, the Government of Canada has prioritized legislative efforts to ensure the responsible development and use of artificial intelligence (AI) in Canadian society. For example, in October, the federal government launched the Consultation on Copyright in the Age of Generative Artificial Intelligence (the Consultation), which focuses on the use of intellectual property by AI systems.1

Alongside the Consultation, the federal government has also reviewed stakeholder feedback since the tabling of the Digital Charter Implementation Act (Bill C-27), which would enact the Artificial Intelligence and Data Act (AIDA) to regulate the development and use of AI systems.2 In response to the feedback, a number of amendments to AIDA have been proposed for ongoing consideration by the Standing Committee on Industry and Technology (INDU).

On November 28, the text of the proposed amendments to AIDA, together with the rationale for each amendment, was published in a letter from the Minister of Innovation, Science and Industry (the Minister) to the Chair of INDU (the Letter), providing more visibility into the direction of AI legislation in Canada.3

Below is a summary of the main amendments and how they may affect businesses and organizations.


Under the original text of AIDA, the majority of obligations imposed upon organizations related to "high-impact" AI systems, but the draft legislation did not define that term. Following the feedback process, a new definition has now been proposed.

The new definition of "high-impact system" would be based on whether the intended use of the AI system falls within one of the classes listed in a newly created Schedule 2, which lists seven classes of systems. The schedule is intended to be modified as AI technology evolves, with any modification guided by two main considerations: (i) the severity and extent of potential adverse impacts, including impacts on human rights and social harms, and who may experience these impacts; and (ii) whether a proposed class of systems is already adequately regulated by other existing laws.

The seven classes of "high-impact" AI systems are:4

  • Class 1: the use of an AI system relating to determinations of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer, or termination;
  • Class 2: the use of an AI system relating to determinations of whether to provide services to an individual, the type or cost of services to be provided to an individual, or prioritization of services to be provided to individuals;
  • Class 3: the use of an AI system to process biometric information for the purpose of identifying an individual (other than in cases where the biometric information is processed with the individual's consent to authenticate their identity) or for the purpose of assessing an individual's behaviour or state of mind;
  • Class 4: the use of an AI system relating to the moderation of content found on an online communications platform, including search engines and social media services, or the prioritization of presentation of such content;
  • Class 5: the use of an AI system relating to health care or emergency services, but excluding a use referred to in any of paragraphs (a) to (e) of the definition "device" in section 2 of the Food and Drugs Act that is in relation to humans;
  • Class 6: the use of an AI system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body; and
  • Class 7: the use of an AI system to assist a "peace officer", as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties, and functions.

The Letter also clarified that AIDA does not regulate the use of AI systems by governments, except for systems intended for the sensitive public sector uses specified in Classes 6 and 7. However, the Minister believes that the obligations placed upon private sector organizations that commercially develop and manage the AI systems used by government institutions, together with the obligations that already apply to the public sector, such as the Canadian Charter of Rights and Freedoms and the Directive on Automated Decision-Making, are sufficient to "ensure that these systems meet the highest standards."5


The AIDA amendments modify several important definitions and concepts in light of the emergence of international frameworks, which were not present when the original legislation was tabled.

Among the main amendments are:6

  • Modification of the definition of "AI systems" – the amended definition would be agnostic to specific techniques or data used in the development of the system, instead introducing the concept of "inference" to distinguish between AI systems and computational systems that are not considered to be AI. According to the Letter, this modification aligns with the new OECD definition of "AI systems";
  • Replacement of the term "regulated activities" with a new section 5.1 – this would clarify that AIDA obligations only apply once systems or machine learning models are placed on the market or put into use in the course of international or interprovincial trade. This means that compliance does not need to be demonstrated prior to putting a system on the market, which would include preliminary phases such as research and development. However, where a high-impact system incorporates a machine learning model, the model would be required to meet AIDA's standards, regardless of whether the model is separately placed on the market. The Letter states that this aligns AIDA with the application of the European Union's AI Act;
  • Modification of sections 8.1 and 10.2 regarding the application of AIDA to AI systems that have been substantially modified – the amended sections would place responsibility for maintaining AIDA compliance on the party in control of any substantial changes to an AI system that would alter the way the system meets its AIDA obligations. According to the Letter, this further aligns AIDA with the AI Act; and
  • Modification of section 12 regarding accountability frameworks – the amended section would explicitly require robust accountability frameworks, but such frameworks would be proportionate to the nature and size of an organization and the risks associated with its activities. The Letter states that this brings AIDA into greater alignment with the AI Act.


As the life of an AI system progresses, different parties become responsible for the system's operation. The AIDA amendments reflect this reality, addressing the main concern from organizations engaged in the development of AI systems that they should not be held accountable for post-deployment obligations that extend beyond their operational scope or capabilities.

Under the AIDA amendments, the concept of "persons responsible" would be replaced with distinct obligations based on each organization's role in relation to the system.7 Some requirements, considered to be foundational, would apply to all organizations in the value chain, such as establishing measures to identify, assess, and mitigate risks of harm and biased output, as well as record-keeping requirements. However, how these shared requirements would be applied to each organization in the value chain has not yet been proposed.

In addition, some requirements are intended to require organizations within the value chain to work together. One notable example is the requirement for AI developers to prepare a model card for machine learning models: a short document that explains the context in which the model is intended for use and how the model's performance should be evaluated.8 The model card would then assist others along the value chain in meeting their own obligations. However, the amendments do not specifically address the consequences where a model card is either incorrect or not followed.

There are also several amendments targeted towards greater assessment of the impacts of both intended and reasonably foreseeable uses, development of feedback mechanisms on the system's performance by users, and creation of incident response and reporting requirements.


The original text of AIDA did not address general-purpose AI systems. However, with the proliferation of general-purpose AI systems since AIDA was introduced, most notably large language model applications such as ChatGPT, new concerns have arisen.

The amended AIDA would define "general-purpose system" as "an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes, and activities not contemplated during the system's development."9 An AI system can be a general-purpose system and a high-impact system at the same time under the amended AIDA.

The proposed requirements for general-purpose systems follow the same principles as those found in sections 9 and 10 of AIDA pertaining to high-impact systems while recognizing that general-purpose systems are expected to have a broader range of uses. These requirements include assessing potential adverse impacts, taking measures to assess and mitigate risks, enabling human oversight, reporting serious incidents, and keeping relevant records.

The Letter also specifically addressed general-purpose generative systems, noting that the difficulty of detecting whether a generated output is created by a machine or a human can have "important consequences in terms of the spread of disinformation and the functioning of societal and democratic institutions."10

The AIDA amendments attempt to address the foregoing by requiring organizations building general-purpose systems with generative abilities to make best efforts to ensure that the outputs can be detected by humans, either unaided or with the assistance of free software (e.g., watermarking content). While the standard is currently set at "best efforts" to reflect the reality that the "technical feasibility of watermarking synthetic media is currently a work in progress", the Letter stated that these requirements pertaining to general-purpose generative systems will be further refined to ensure that they are aligned with the state of the art to comply with the principle of full transparency.

In addition, the Letter noted that there is a growing risk of humans mistaking an AI system they are interacting with for another human.11 Accordingly, the AIDA amendments would require AI systems to promptly advise the human user that they are communicating with an AI system in circumstances where it is reasonably foreseeable that a human interacting with an AI system might confuse the system for another human. This requirement would apply to all AI systems, including those that are neither high-impact nor general-purpose as defined under the amended AIDA.


The AIDA amendments also clarify the role of the newly created Artificial Intelligence and Data Commissioner (AIDC) and distinguish it from the role of the Minister under AIDA. In doing so, the amendments strengthen the powers of the AIDC, shifting the investigative powers previously allocated to the Minister to the AIDC.12

The AIDC would also be responsible for publishing an annual report, on a publicly available website, on the administration and enforcement of AIDA during the previous calendar year.13 This requirement would ensure transparency around the AIDC's work and provide an opportunity for stakeholders to continue contributing their input towards the development of AIDA.


The AIDA amendments and the Letter provide greater clarity regarding the regulatory landscape of AI in Canada. For example, the amendments have now defined the classification of "high-impact systems", which was previously a point of uncertainty for organizations that develop or operate AI systems.

In addition, the amendments and the Letter suggest that Canada's regulatory approach to AI favours alignment with international approaches from the EU and the OECD. As such, interested stakeholders should remain attentive to the regulatory developments around the world for a better understanding of the future amendments that may be made to AIDA.

Lastly, the amendments are responsive to the ongoing developments of AI, such as the addition of provisions related to "general-purpose systems" to address generative AI technology. As stated in the Letter, various provisions in AIDA are intended to be modified as the technology evolves, and stakeholders should expect more amendments to come as new concerns arise.


1. The deadline for the Consultation was extended to January 15, 2024. For more information about the Consultation, please see our Cassels Comment on the Consultation.

2. Please see our Cassels Comment on the Artificial Intelligence and Data Act for more information.

3. Canada. Innovation, Science and Economic Development Canada, Letter to the Chair of the Standing Committee on Industry and Technology on Bill C-27 (November 28, 2023).

4. Ibid at pp 2-5.

5. Ibid at p 6.

6. Ibid at pp 6-7.

7. Ibid at p 7.

8. Ibid at p 8.

9. Ibid at p 9.

10. Ibid at p 9.

11. Ibid at p 10.

12. Ibid at p 11.

13. Ibid at p 12.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
