This fall, the House of Commons Standing Committee on Industry and Technology (INDU) is expected to begin an extensive study of Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 would repeal the current federal private sector privacy law, Part 1 of the Personal Information Protection and Electronic Documents Act (PIPEDA), and would enact the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act (PIDPTA), and the Artificial Intelligence and Data Act (AIDA).

Among the most talked-about aspects of Bill C-27 is AIDA, which would have considerable impacts on the deployment of AI systems in Canada. As presently drafted, AIDA leaves a number of important aspects of its regime unspecified, to be filled in later through regulations.

Innovation, Science and Economic Development Canada (ISED) will be the department primarily responsible for administering AIDA. Notably, ISED recently announced that it is developing a voluntary code of practice to help developers and users of AI systems avoid harmful effects, build public trust, and align with the regulations expected under AIDA if the Bill is passed.

Understanding ISED's proposed generative AI code

The code aims to guide the development, deployment and use of generative AI systems. The government announced its consultation on the code in August 2023, with the consultation period ending on September 14, 2023.

The key elements proposed for the code include:

  • Safety: Safety must be a focus throughout an AI system's lifecycle. Developers should identify and prevent malicious uses of AI systems (e.g., impersonation), while being transparent with users about system capabilities and limitations. Safety considerations should also extend to harmful or inappropriate uses of AI systems (e.g., relying on AI for medical advice), which developers should address by providing clear information to users.
  • Fairness and Equity: AI systems have the potential to perpetuate biases, so developers should ensure that generative AI systems are trained on representative data and produce unbiased outputs. To this end, developers should assess and curate datasets, and implement measures to mitigate biased output.
  • Transparency: Generative AI systems can be hard to explain, so developers should create mechanisms to detect AI-generated content and provide meaningful explanations of how their systems work. Organizations should clearly identify to users that they are interacting with an AI system, especially where users could mistake the AI system for a human.
  • Human Oversight and Monitoring: AI systems should be subject to adequate human oversight during deployment and operations, with mechanisms in place to identify, report and address adverse impacts after the system is deployed.
  • Validity and Robustness: Developers should ensure that AI systems work as intended across contexts and are resilient against misuse. They should use rigorous testing methods, including adversarial testing, to measure performance and identify vulnerabilities, and employ appropriate cybersecurity measures to prevent adversarial attacks.
  • Accountability: Generative AI systems have complex risk profiles, necessitating comprehensive risk management processes. AI systems should be subject to internal and external audits, and organizations should clearly define the roles and responsibilities of personnel. Staff should be trained in risk management practices.

What is missing from the voluntary code of practice?

Interestingly, privacy is not addressed in the voluntary code. Nonetheless, data protection and privacy authorities around the world, including the Office of the Privacy Commissioner of Canada (OPC), have turned their attention to the emerging privacy risks that could accompany more extensive development, use and deployment of AI systems.

The G7 privacy authorities recently released a joint statement on the use of generative artificial intelligence, flagging for stakeholders and policymakers some of the key privacy issues that AI may present. The G7 regulators recommend that privacy by design be embedded in the design, conception, operation and management of generative AI systems.

In Canada, the OPC, jointly with provincial privacy authorities, recently announced an investigation into prominent AI firm OpenAI, the developer of ChatGPT. It remains to be seen whether the code of practice will meaningfully inform how AI is deployed in Canada, as looming legislation and rapid innovation simultaneously shape the role of AI in today's economy.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.