Foreword
In recent years, advances in artificial intelligence1 (AI), particularly in generative AI,2 have been impressive. Whether we're talking about large language models capable of processing and generating human-like text, self-driving cars, or the facial biometric authentication now used to unlock most cell phones, AI systems occupy a significant place in every sphere of human life.
Technological advances unfortunately come with their share of problems, such as deepfakes, which contribute to misinformation, and discriminatory bias, which we analyzed in a previous article. The Superior Court of Quebec recently published a notice to the legal profession and the public, warning against the potential fabrication of legal authorities through the use of large language models. To counter the risks inherent in this emerging technology, several countries have announced their intention to regulate the design, development, use and availability of AI systems within their borders.
In this article, the authors focus on AI legal framework initiatives in Canada and Quebec.
Introduction
Some people talk about the regulation of artificial intelligence as something new; however, Canada and Quebec already have an applicable legislative framework. Existing substantive law, ethical rules and our Canadian and Quebec charters of rights and freedoms,3 to name but a few, all apply to AI. In Quebec, for example, a fault committed through AI that causes damage will be subject to the civil liability regime already provided for in the Civil Code of Québec.4 A case of discrimination caused by AI will be subject to the Charters. And although the new Law 25, discussed in more detail below, does not mention AI specifically, it does cover decisions made by an entirely automated process.
Nevertheless, all agree that a legislative framework aimed more specifically at AI would ensure greater transparency, fairness and security regarding these constantly evolving technologies.
Principles and recommendations
Without going into too much detail, we can recall that in March 2017, the Government of Canada launched the Pan-Canadian Artificial Intelligence Strategy, making Canada the first country with a national AI strategy. As part of this strategy, Canada set itself the challenge of building one of the most robust national AI ecosystems in the world by 2030 by attracting the best talent, increasing cutting-edge research capacity, and fostering the commercialization and adoption of responsible AI. The strategy also established three centres of excellence supporting AI research in Canada: the Alberta Machine Intelligence Institute in Edmonton, the Institut québécois d'intelligence artificielle in Montreal and the Vector Institute for Artificial Intelligence in Toronto.
In 2018, the Montreal Declaration for a Responsible Development of Artificial Intelligence set out an ethical framework consisting of 10 basic principles. These principles include the well-being of all sentient beings, respect for autonomy, protection of privacy, social equity, diversity and inclusion, solidarity, sustainable development, prudence, and human responsibility.5 These are, of course, well-known principles, already applicable in most legal systems, but they have been refined here to meet the challenges of AI.
Then, on November 12, 2020, the Office of the Privacy Commissioner of Canada (OPC) issued recommendations for the regulation of artificial intelligence.6 Pointing out that uses of AI that are based on personal information can have serious consequences for privacy, the OPC made several recommendations, including the following:
- the requirement for AI systems developers to integrate protection of individual privacy into the design of algorithms and models;
- the establishment of a right to a meaningful explanation enabling individuals to understand the decisions made about them by an AI system, as well as ensuring that these explanations are based on accurate information and are not discriminatory or biased;
- the establishment of a right to contest automated decisions;
- the regulator's authority to require organizations to demonstrate compliance with the above.
In its report published in September 2023, the OPC also made it one of its strategic priorities to monitor the rapid advances in technology, particularly in AI and generative AI, in order to identify their impact on privacy.
Furthermore, in November 2021, UNESCO's 193 member states, including Canada, adopted the Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject. It is said that this recommendation "will not only protect but also promote human rights and human dignity, and will be an ethical guiding compass and a global normative bedrock allowing to build strong respect for the rule of law in the digital world."7 This recommendation also recalls the need for "international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole" and for "a human-centred AI. AI must be for the greater interest of the people, not the other way around."8
More recently, on April 19, 2023, in an open letter published in La Presse, a group of 75 people working in the field of AI, including researchers, academics, company CEOs and heads of organizations, made an urgent plea for a legislative framework to oversee the development of artificial intelligence, in order to guarantee transparency, accountability and ethics in its use, as well as the protection of privacy:
And should Canada be amongst the first countries to adopt its legislation, it will send a strong signal to businesses across the world that they can and should turn to Canada and to Canadian companies if they want to develop or procure trustworthy and responsible AI systems that uphold human rights and protect the well-being of its users.
At almost the same time, Quebec's Minister of the Economy, Innovation and Energy launched a non-partisan, transparent and inclusive collective reflection on the framework for regulating artificial intelligence in Quebec. Two months later, in June 2023, he also announced the "Confiance IA" initiative, a public-private industrial research consortium aimed at providing an ethical, reliable and responsible framework for the development of artificial intelligence in Quebec.
In spring and summer 2023, dozens of experts brought together by the Conseil de l'innovation du Québec (Quebec Innovation Council) examined six specific themes:
- the AI governance framework;
- the framework for public investment in AI in the research and private sectors;
- the framework for government use of AI;
- the impact of AI on work and the Quebec job market;
- other societal impacts of AI, notably on democracy, the environment, arts and culture; and
- Quebec's role in the international regulation of AI and as a leader in responsible AI development and deployment.
In October 2023, several reports on the state of play in these areas were published on the Conseil de l'innovation du Québec website.
Finally, on September 27, 2023, at the opening of the ALL IN event on artificial intelligence, the federal Minister of Innovation, Science and Industry presented the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.9 Companies that have signed10 or will voluntarily sign this code commit to supporting the continued development of a reliable and responsible AI ecosystem in Canada through six key principles: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness.
Federal regulation of AI
In June 2022, the federal government tabled a bill to regulate the design, development and use of AI systems in international and interprovincial trade and commerce by establishing common requirements applicable across Canada.11 The Artificial Intelligence and Data Act (the "AIDA") is proposed as part of Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.
Currently being examined in committee by the House of Commons, the AIDA applies to all persons, including trusts, joint ventures, partnerships, unincorporated associations and any other legal entities, engaged in the following regulated activities:
- processing or making available for use any data relating to human activities for the purpose of designing, developing or using an AI system; and
- designing, developing or making available for use an AI system or managing its operations.12
The AIDA also defines an AI system as "a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions".13
Through the creation of new criminal law provisions, the AIDA will prohibit certain conduct in relation to AI systems that may cause serious harm to Canadians. This includes possessing or using illegally obtained personal information to design, develop, use or make available an AI system,14 as well as making available, with intent to defraud, an AI system whose use causes substantial economic loss to an individual,15 an approach that many experts in the field view favourably:
In addition to establishing that high-impact AI systems must meet obligations including with respect to safety, transparency, and human rights, AIDA would sanction conduct that causes serious harm to individuals or their interests. It provides the scaffolding for regulations to be developed in consultation with a broad range of stakeholders, defining specific concepts and requirements that will become the guardrails to developing and deploying AI.16
The AIDA will also address the risks of systemic bias in AI systems in the private sector.17
Thus, the AIDA will adopt a risk-based approach and create new obligations for "high-impact systems".18 Although the criteria for identifying such systems have yet to be defined in the regulations, a high-impact system will have to be the subject of measures designed to identify, assess, mitigate and control the risks of harm or biased results that its use could entail before it is made available.19
Violations are punishable by two types of sanctions: fines of up to C$10 million or C$25 million, depending on the offence, for corporate offenders, and prison sentences of up to five years less a day and/or discretionary fines for individual offenders.20
Canada already plays a leadership role in AI thanks to multiple initiatives in the field over the past few years, and the AIDA "puts forward a legislative framework for AI that will be supported by regulations and standards, making it agile enough to adapt to new capabilities and applications of AI as it continues to evolve."21
Although the AIDA has drawn some criticism22 (the lack of prior public consultation, concerns that the new requirements could slow advances in AI-reliant technologies, and the lack of detail about which technologies it will govern, to name just a few), most experts agree that AI needs to be regulated:
If the regulatory framework is fair, there's no reason why it should hold back innovation. Canada, Quebec and Montreal are AI hubs, and this hasn't stopped us from adopting the principles of the Montreal Declaration.23 [Translation]
Note, however, that the AIDA is not scheduled to come into force until 2025, whereas in other parts of the world, such as China, the United States and Europe, laws are already in force or about to be passed. Take, for example, the Artificial Intelligence Act, soon to be adopted by the European Union. We'll look at some of these regulations in an upcoming article.
Lastly, although the AIDA does not yet have the force of law, the Office of the Privacy Commissioner of Canada has investigative powers in matters of privacy and personal information that already apply to AI. For example, in April 2023, it launched an investigation into OpenAI, the company behind ChatGPT, following a complaint alleging that personal information had been collected, used and disclosed without consent. The investigation is still ongoing.
Quebec government regulation of AI
As for Quebec, although no AI-specific legislation is in place, the Act to modernize legislative provisions as regards the protection of personal information24 (Law 25) has set out new rules to protect the privacy of Quebec citizens since September 2022, rules that also take into account the impact of AI.
In particular, although Law 25 does not explicitly mention AI, it changes the way public bodies and enterprises must manage personal information that will be processed by AI. Some of its provisions impose new measures to ensure better protection of citizens' privacy while taking into account new technological realities such as AI.
For example, it provides new rights for people whose personal information is processed exclusively by an automated decision-making system (such as an algorithm that calculates and grants a family allowance without any human intervention).25 As of September 2023, these individuals must be informed that their personal information is being processed automatically. On request, they must also be informed of the personal information used to make the decision, of the reasons and principal factors that led to it, and of their right to have that personal information corrected. They must also be given the opportunity to submit observations to a member of the organization's staff who has the authority to review the decision.
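To illustrate, here is a minimal, hypothetical sketch (in Python) of how an organization could structure a fully automated decision so that the explanations described above can be produced on request. The scenario, the names (FamilyAllowanceDecision, decide_allowance) and the eligibility rule are illustrative assumptions, not requirements taken from Law 25 or any real benefit program.

```python
# Hypothetical sketch only: an automated family-allowance decision that keeps a
# record of the personal information and principal factors behind the outcome,
# so they can be disclosed to the individual on request.
from dataclasses import dataclass, field


@dataclass
class FamilyAllowanceDecision:
    granted: bool
    amount: float
    personal_info_used: dict                                # personal information relied on
    principal_factors: list = field(default_factory=list)   # reasons behind the outcome
    human_review_requested: bool = False                    # set if the individual asks for review


def decide_allowance(applicant: dict) -> FamilyAllowanceDecision:
    """Fully automated decision, with an auditable explanation attached."""
    income = applicant["net_family_income"]
    children = applicant["number_of_children"]

    # Illustrative eligibility rule, not an actual benefit formula.
    granted = children > 0 and income < 60_000
    factors = [
        f"number_of_children={children}",
        f"net_family_income={income} (eligibility threshold: 60000)",
    ]
    amount = 1_500.0 * children if granted else 0.0

    return FamilyAllowanceDecision(
        granted=granted,
        amount=amount,
        personal_info_used={"net_family_income": income, "number_of_children": children},
        principal_factors=factors,
    )


# On request, the individual can be shown exactly what was used and why.
decision = decide_allowance({"net_family_income": 52_000, "number_of_children": 2})
print(decision.personal_info_used, decision.principal_factors)
```

The point of keeping principal_factors is simply that the "reasons and principal factors" owed to the individual exist somewhere other than inside an opaque model.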
Law 25 also aims to better protect individuals against technologies allowing them to be identified, located or profiled. Profiling is broadly defined as "the collection and use of personal information to assess certain characteristics of a natural person, in particular for the purpose of analyzing that person's work performance, economic situation, health, personal preferences, interests or behaviour."26 As of September 2023, functions that allow the identification, location or profiling of individuals must be disabled by default on any device.
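In the same illustrative spirit, the following sketch shows what "disabled by default" can look like in practice: every identification, location or profiling function starts out off and is activated only by an explicit user action. The settings class and feature names are assumptions made for the example, not taken from Law 25 or any particular product.

```python
# Hypothetical sketch only: identification, location and profiling functions
# are off by default and require an explicit user action to be enabled.
from dataclasses import dataclass


@dataclass
class PrivacySettings:
    face_recognition: bool = False  # identification: disabled by default
    geolocation: bool = False       # location: disabled by default
    ad_profiling: bool = False      # profiling: disabled by default

    def enable(self, feature: str) -> None:
        """Turn a feature on only in response to an explicit user choice."""
        if not hasattr(self, feature):
            raise ValueError(f"unknown feature: {feature}")
        setattr(self, feature, True)


settings = PrivacySettings()    # nothing is active until the user opts in
settings.enable("geolocation")  # explicit opt-in by the user
```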
In addition, as of September 2022, the creation of a bank of biometric characteristics or measurements (voiceprint, fingerprints, DNA, etc.) by an enterprise or public body must be disclosed to the Commission d'accès à l'information no later than 60 days before it is brought into service.27
As with the AIDA, violators of Law 25 and its regulations are liable to severe penalties: administrative monetary penalties of up to C$10 million and penal fines of up to C$25 million for corporate offenders, and fines of C$5,000 to C$100,000 for individual offenders.
Other requirements under Law 25 will come into force in September 2024.
Conclusion
Although much has been written about a legislative framework for AI, and everyone seems to agree that such a framework is necessary, there's still a long way to go before we see anything concrete.
In the meantime, it should be remembered that both Canadian and Quebec substantive law apply to AI.
It's also worth bearing in mind that Canada is not alone in wanting to introduce new AI rules, and it will be interesting to watch what happens elsewhere, especially in Europe, the USA and China.
Footnotes
1. In this article, "artificial intelligence" refers to a machine's ability to imitate or surpass the intelligent behaviour and activities of humans.
2. AI capable of creating content such as photos, text, videos and music.
3. Canadian Charter of Rights and Freedoms, Part I of the Constitution Act, 1982, enacted as Schedule B to the Canada Act 1982, 1982, c. 11; Charter of human rights and freedoms, CQLR, c. C-12.
4. CQLR c. CCQ-1991.
5. The Montreal Declaration for a Responsible Development of Artificial Intelligence, online. The declaration's 10 principles are: 1) Principle of well-being: AI must increase the well-being of all sentient beings; 2) Principle of respect for autonomy: AI must increase people's control over their lives and environment, not control them with unreliable information, propaganda, lies, or induce them to vote or see things in a predefined way; 3) Principle of privacy and intimacy: privacy and intimacy must be protected from the intrusion of AI systems; 4) Principle of solidarity: the development of AI must be compatible with the maintenance of bonds of solidarity between people and generations; 5) Principle of democratic participation: AI must satisfy the criteria of transparency, justification and accessibility; 6) Principle of equity: AI must contribute to a just and equitable society; 7) Principle of inclusion of diversity: AI must be compatible with the maintenance of social and cultural diversity; 8) Principle of prudence: those involved must anticipate the potential adverse consequences of AI; 9) Principle of responsibility: AI must not contribute to the disempowerment of human beings when a decision has to be made; 10) Principle of sustainable development: AI must ensure strong ecological sustainability.
6. The OPC has been calling for reform of the Personal Information Protection and Electronic Documents Act (PIPEDA) for a number of years, in response to the security challenges posed by emerging technologies.
7. "Ethics of Artificial Intelligence" (2023). UNESCO, online.
8. Ibid.
9. Government of Canada (September 2023). "Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems," Canada.ca, online.
10. There were 15 signatories to the code of conduct at the time of writing.
11. Section 4(a) of the AIDA Bill.
12. Section 5(1) of the AIDA Bill.
13. Section 2 of the AIDA Bill.
14. Section 38 of the AIDA Bill.
15. Section 39 of the AIDA Bill.
16. Catherine Régis and Yoshua Bengio (April 19, 2023). "Il y a urgence à adopter la Loi sur l'intelligence artificielle et les données," La Presse, online.
17. See in particular section 8 of the AIDA Bill.
18. Section 5(1) of the AIDA Bill.
19. Section 8 of the AIDA Bill.
20. Section 40 of the AIDA Bill.
21. Supra, note 16.
22. See Teresa Scassa (August 16, 2023). "Canada's Draft AI Legislation Needs Important Revisions", Centre for International Governance Innovation, online.
23. Emmanuel Delacour (September 18, 2022). "La stratégie canadienne pour encadrer l'IA," CScience, online.
24. SQ 2021, c. 25.
25. Section 12.1 of Law 25 and section 65.2 of the Act respecting Access to documents held by public bodies and the Protection of personal information.
26. Sections 8.1 and 65.0.1 of the Act respecting Access to documents held by public bodies and the Protection of personal information.
27. Section 45 of the Act to establish a legal framework for information technology.