Introduction

On December 7, 2023, the Office of the Privacy Commissioner of Canada ("OPC") released an article setting out nine principles intended to guide developers, providers and organizations in navigating the development and use of generative artificial intelligence ("AI"). Generative AI, commonly known to the public through systems such as ChatGPT, is a rapidly evolving technology that uses machine learning to create content (such as text, computer code, images, video, or audio). Privacy concerns arise where the AI is trained on data sets that include personal information.

Currently, Canada lacks any AI-specific legislation. Under Bill C-27, which introduces sweeping privacy reforms, the proposed Artificial Intelligence and Data Act ("AIDA") is intended to fill that gap. Until its enactment, generative AI systems remain without AI-specific regulation. While systems such as ChatGPT may seem innovative and useful, critics have decried them for their "web scraping" function, which extracts data from virtually any source with a human-readable output. ChatGPT also raises concerns about the use of personal information without consent.

With these concerns in mind, the nine principles are set out below.

The Nine Principles

The principles below address the collection, use or disclosure of personal information through generative AI. These best practices are not exhaustive, and their applicability will vary depending on each party's role and procedures.

  1. Legal Authority and Consent: Organizations should understand their legal authority, if any, to collect, use, disclose and delete personal information, and should ensure that any consent obtained is valid and meaningful. Information collected from third parties must also have been obtained legally.
  2. Appropriate Purposes: Generative AI should only be used for purposes that a reasonable person would consider appropriate in the circumstances. The manner of collecting information must also be appropriate.
  3. Necessity and Proportionality: The use of generative AI should be fair and demonstrably necessary, not merely potentially useful. The AI system must also be both valid and reliable.
  4. Openness: Developers and providers must allow individuals to understand the primary purpose and any secondary purpose for the use of the generative AI system. Transparency is key.
  5. Accountability: Organizations should comply with privacy legislation and be able to demonstrate that compliance. Outputs from AI systems should also be traceable and explainable; where they are not, this should be made clear to the user.
  6. Individual Access: Individuals should be capable of accessing or correcting their own information collected by generative AI through set procedures.
  7. Limiting Collection, Use and Disclosure: Generative AI systems should not collect more information than is necessary to fulfill their specified purpose. Retention schedules should be used so that information is not kept once it is no longer required.
  8. Accuracy: Personal information that is used to train generative AI should be as accurate as possible. Users should also be informed of any limitations of the AI system in terms of accuracy.
  9. Safeguards: Developers, providers and organizations using generative AI should design and/or monitor their systems to guard against inappropriate uses, such as the creation of illegal content or discriminatory treatment.

Takeaways and Recommendations

The new principles emphasize the need to protect vulnerable groups, including children and other minoritized communities. Under the safeguards principle, organizations are encouraged to use and create AI systems that do not produce discriminatory results or perpetuate hateful rhetoric. They are also encouraged to adopt additional mitigation measures, such as increased monitoring. For children in particular, who are more susceptible to bias and to believing false information, AI systems must be developed and used in a way that balances their benefits against their potential harms.

The OPC's article also notes that there may be potential "no-go zones" in the future, established by investigations, case law or policy. These refer to areas in which generative AI would be prohibited from operating. They may include the creation of "deep fakes" for malicious purposes, the use of AI bots to coerce or trick human users into divulging personal information, or the creation of defamatory or false material about an individual. While these no-go zones have yet to be established in Canadian law, the OPC's reference to them shows recognition of the controversial dialogue around AI. It also serves as a warning about the risks of leaving the technology unregulated.

Overall, the OPC's principles are a strong starting point for organizations performing their own self-assessments. They are adaptable and easy to understand, for highly technical parties and lay persons alike. However, the new principles are not legally binding, and it remains unclear whether users and developers will comply, as well as what measures will be used to ensure compliance. Still, parties are strongly encouraged to review the principles in tandem with their own privacy policies and procedures. Risking non-compliance is inadvisable, especially given the proposed penalties under Bill C-27, which include hefty fines. Organizations should, therefore, prioritize the incorporation of privacy-by-design and ethics-by-design concepts in their compliance frameworks. The principles offer a guiding hand to help key players adapt to changing technology and regulations. With AIDA on the horizon, it also remains to be seen whether and how the new legislation will integrate the OPC's principles.

The author would like to acknowledge Torkin Manes' Articling Student Herman Wong for his invaluable contribution in drafting this bulletin.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.