The Canadian government recently introduced the Digital Charter Implementation Act, 2022 (C-27) (the Act), a bill designed to bolster Canada's privacy and data protection legal framework and regulate artificial intelligence (AI) systems. The Act expands upon its predecessor bill introduced in 2020 (C-11) and comprises three proposed statutes:

  • the Consumer Privacy Protection Act (CPPA), which would repeal and replace sections of the Personal Information Protection and Electronic Documents Act (PIPEDA);
  • the Personal Information and Data Protection Tribunal Act (Tribunal Act), which would create an administrative tribunal that, among other things, can review decisions made by the Privacy Commissioner (the Commissioner) regarding violations of the CPPA; and
  • the Artificial Intelligence and Data Act (AIDA), which would regulate the design, development, sale, and operation of "AI systems" and related data processing.

The CPPA and Tribunal Act would tighten restraints on how organizations process personal information and give individuals greater insight into and choices regarding such processing. Those changes would in some ways bring Canadian privacy law more in line with stringent data protection regimes in other jurisdictions, such as the General Data Protection Regulation (GDPR) of the European Union (EU) and its United Kingdom counterpart. The legislation would also grant individuals a new private right of action for certain CPPA violations.

The AIDA would establish risk-based, Canada-wide requirements for AI systems and permit the federal government to prohibit the sale or operation of an AI system if there are "reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm." At first blush, the legislation appears to be far less prescriptive than the proposed Artificial Intelligence Act (AIA) under consideration by the EU's co-legislators. However, the AIDA leaves many of the details of the requirements to the federal government to specify in future regulations, which might lead to a more burdensome regime.

Because the Act would have some extraterritorial effect, US and other non-Canadian businesses should watch the bill's evolution through the legislative process and consider the steps they would need to take to comply.

Consumer Privacy Protection Act and Personal Information and Data Protection Tribunal Act

Scope

The CPPA would have the same scope of application as PIPEDA: it would apply to every organization that collects, uses, or discloses personal information in the course of commercial activities, as well as to personal information about employees or job applicants handled "in connection with the operation of a federal work, undertaking or business." Federal works, undertakings, or businesses within the legislative authority of the Parliament of Canada include:

  • telecommunications;
  • broadcasting;
  • interprovincial or international trucking, shipping, railways, or other transportation;
  • aviation;
  • banking;
  • nuclear energy;
  • activities related to maritime navigation and shipping; and
  • local businesses in Yukon, Nunavut, and the Northwest Territories (where all private-sector activity is within the federal government's jurisdiction).

The Commissioner has interpreted PIPEDA to apply to organizations outside Canada if such an organization's activities have a "real and substantial" connection to Canada. This standard for extraterritoriality would extend to organizations under the CPPA, which states, "for greater certainty," that its scope covers actions by organizations with respect to personal information "collected, used or disclosed interprovincially or internationally." In addition, the CPPA could be applied to personal information "collected, used or disclosed by an organization within a province." In that case, however, the federal cabinet (technically, the Governor in Council) could decide to exempt certain organizations, activities, or classes of activities already subject to substantially similar provincial legislation.

Enforcement

The CPPA would expand the powers currently available to the Commissioner under PIPEDA and authorize substantial administrative penalties for noncompliance. Under the CPPA, the Commissioner would be empowered to issue cease-and-desist orders; impose compliance agreements on organizations and, separately, require compliance with those agreements; and order public disclosure of the measures organizations take to correct noncompliance.

The Commissioner's decisions would be subject to limited review and potential alteration by the newly created Personal Information and Data Protection Tribunal (the Tribunal). The Tribunal's decisions would be subject to review only by the Federal Court, which, among other things, can invalidate administrative decisions for unfairness, unreasonableness, or unlawfulness.

Civil fines under the CPPA could reach the higher of $10 million (CAD) or three percent of the offending organization's global revenue. And where an organization "knowingly contravenes" certain provisions of the CPPA (e.g., the obligations to report breaches of security safeguards or to retain information subject to a government access request), it could be fined criminally, on conviction on indictment, up to the higher of $25 million (CAD) or five percent of its global revenue.
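
A quick arithmetic sketch of the two caps (the function names are ours, and "global revenue" stands in for the bill's more precise revenue measure):

```python
def cppa_admin_penalty_cap(global_revenue_cad: float) -> float:
    """Higher of CAD $10M or 3% of global revenue (illustrative)."""
    return max(10_000_000.0, 0.03 * global_revenue_cad)

def cppa_criminal_fine_cap(global_revenue_cad: float) -> float:
    """Higher of CAD $25M or 5% of global revenue, for knowing contraventions."""
    return max(25_000_000.0, 0.05 * global_revenue_cad)

# For a firm with CAD $1B in global revenue, the revenue-based prong governs:
print(f"${cppa_admin_penalty_cap(1e9):,.0f}")   # $30,000,000
print(f"${cppa_criminal_fine_cap(1e9):,.0f}")   # $50,000,000
```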

Private Right of Action

The CPPA would grant individuals a new private right of action against an organization for loss or injury based on a non-appealable finding by the Commissioner, or a ruling by the Tribunal (on appeal of a finding by the Commissioner), that the organization has violated the CPPA. To obtain financial relief for damages suffered as a result of the violation, or of a CPPA offense for which the organization has been convicted, an individual could bring an action in the Federal Court or a provincial superior court.

Notice and Choice

Similar to PIPEDA, the CPPA would require organizations, before collecting personal information, to notify the individuals concerned of the collection, of how the personal information will be used and may be disclosed to others, and of the purposes of those uses and disclosures. The notice would have to be written "in plain language that an individual to whom the organization's activities are directed would reasonably be expected to understand," and organizations would be required to identify the names or types of the third parties to whom personal information will be disclosed.

The CPPA is also like PIPEDA in that it generally would require individuals' consent to the use and disclosure of their personal information, subject to specified exceptions. In addition to the PIPEDA exceptions, the CPPA would allow collection, use, or disclosure of personal information without consent for: (1) certain business activities (e.g., an activity required to provide products or services to an individual); (2) a legitimate interest "that outweighs any potential adverse effect on the individual"; (3) certain public interest purposes; (4) transfers of personal information to service providers (discussed further below); and (5) de-identification of personal information. However, express consent would still be required for business activities and legitimate-interest activities if they are for "the purpose of influencing the individual's behavior" or would not be expected by a reasonable person.
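
The interplay between the exceptions and the express-consent carve-back can be pictured as decision logic. The Python sketch below is illustrative only: the category labels and boolean inputs are our simplifications, and real compliance would turn on legal analysis rather than a lookup:

```python
# Illustrative only: the category labels below paraphrase the CPPA's consent
# exceptions; real compliance turns on legal analysis, not a lookup table.
EXCEPTIONS = {
    "business_activity",
    "legitimate_interest",
    "public_interest",
    "service_provider_transfer",
    "de_identification",
}

def consent_needed(purpose: str, influences_behavior: bool,
                   expected_by_reasonable_person: bool) -> str:
    if purpose not in EXCEPTIONS:
        return "consent required"
    # Carve-back: the business-activity and legitimate-interest exceptions
    # fall away where the activity aims to influence the individual's
    # behavior or would surprise a reasonable person.
    if purpose in {"business_activity", "legitimate_interest"} and (
            influences_behavior or not expected_by_reasonable_person):
        return "express consent required"
    return "no consent required (exception applies)"

print(consent_needed("legitimate_interest",
                     influences_behavior=True,
                     expected_by_reasonable_person=True))
# -> express consent required
```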

De-Identification

For personal information from which identifying elements are removed, the CPPA would distinguish "de-identified information" from "anonymized information." Under the CPPA, "de-identified information" is information that does not directly identify an individual but potentially could in combination with other information. In contrast, anonymized information cannot identify any person directly or indirectly—due to the irreversible and permanent modification of such information. The CPPA emphasizes, "for greater certainty," that anonymized information (but not de-identified information) is exempt from its restrictions.
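
The distinction can be made concrete in code. In the following Python sketch (illustrative only; the field names and the keyed-hash approach are our assumptions, not anything the bill prescribes), the de-identified output remains linkable by anyone holding the key, while the anonymized output is irreversibly generalized:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # hypothetical key held only by the organization

def de_identify(record: dict, key: bytes = SECRET_KEY) -> dict:
    """Replace the direct identifier with a keyed pseudonym.

    The output no longer directly identifies anyone, but a holder of the
    key (or of auxiliary data such as the retained postal code) could
    re-link it, so it would remain regulated "de-identified information".
    """
    pseudonym = hmac.new(key, record["name"].encode(), hashlib.sha256).hexdigest()[:12]
    return {
        "person_id": pseudonym,
        "postal_code": record["postal_code"],  # quasi-identifier kept: residual risk
        "purchase": record["purchase"],
    }

def anonymize(record: dict) -> dict:
    """Irreversibly drop and generalize identifying elements.

    No key or lookup table survives, and the quasi-identifier is coarsened,
    aiming at the "irreversible and permanent" standard for the exempt
    anonymized category.
    """
    return {"region": record["postal_code"].split(" ")[0], "purchase": record["purchase"]}

record = {"name": "John Smith", "postal_code": "M5V 2T6", "purchase": "laptop"}
print(de_identify(record))  # pseudonymous, but linkable with the key
print(anonymize(record))    # no path back to the individual
```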

Transparency Around Algorithms

In addition to the rights that would be granted to individuals under the AIDA (discussed in greater detail below), the CPPA would give individuals certain rights regarding the use of their personal information in automated decision-making. In particular, individuals would have a right to an explanation of any prediction, recommendation, or decision that an automated decision system makes about them using their personal information, if it could have a significant impact on them. The explanation would have to include the types and sources of the personal information used and the reasons or principal factors that led to the output. By contrast, the GDPR empowers individuals with control over whether organizations can make decisions at all "based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
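
As a rough picture of what such an explanation could contain, the following Python sketch assembles the elements the CPPA names (types and sources of personal information, principal factors); the schema, feature weights, and credit-decision scenario are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    """Hypothetical container for the CPPA's explanation elements."""
    decision: str
    info_types: list         # types of personal information used
    info_sources: list       # where that information came from
    principal_factors: list  # reasons or principal factors behind the output

def explain_decision(features: dict, outcome: str) -> DecisionExplanation:
    # Rank features by (assumed) importance to surface the principal factors.
    top = sorted(features.items(), key=lambda kv: abs(kv[1]["weight"]), reverse=True)[:3]
    return DecisionExplanation(
        decision=outcome,
        info_types=list(features),
        info_sources=sorted({v["source"] for v in features.values()}),
        principal_factors=[f"{name} (weight {v['weight']:+.2f})" for name, v in top],
    )

features = {
    "credit_utilization": {"weight": -0.62, "source": "credit bureau"},
    "income": {"weight": 0.41, "source": "application form"},
    "late_payments": {"weight": -0.35, "source": "credit bureau"},
}
print(explain_decision(features, "declined"))
```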

Security Safeguards; Accountability and Transparency Program

The CPPA would require regulated organizations to put in place physical, organizational, and technological safeguards to help ensure that personal information is protected against loss or theft and against any unauthorized access, disclosure, copying, use, or modification. These safeguards would need to be proportionate to the sensitivity of the information and include "reasonable measures to authenticate the identity of the individual to whom the personal information relates." For example, a credit reporting agency, after receiving information about a purported John Smith, would need to be reasonably sure that negative information is associated with the right John Smith before including it in the agency's record for Mr. Smith.
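
A minimal sketch of one such authentication safeguard, assuming a simple multi-field match (the chosen fields and the all-fields-must-agree rule are illustrative assumptions, not CPPA requirements):

```python
def same_individual(on_file: dict, incoming: dict) -> bool:
    """Require agreement on several identifiers, not just the name,
    before linking incoming information to an existing file."""
    return all([
        on_file["name"].casefold() == incoming["name"].casefold(),
        on_file["date_of_birth"] == incoming["date_of_birth"],
        on_file["address"].casefold() == incoming["address"].casefold(),
    ])

existing = {"name": "John Smith", "date_of_birth": "1980-04-02", "address": "12 Elm St"}
negative = {"name": "John Smith", "date_of_birth": "1975-11-19", "address": "9 Oak Ave"}
assert not same_individual(existing, negative)  # a different John Smith: do not link
```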

The CPPA would afford individuals greater insight into and the potential to affect the handling of their personal data by requiring each regulated organization to implement a privacy management program with mechanisms for addressing individuals' requests for information and complaints. The Commissioner would have authority to inspect organizations' policies, practices, and procedures and to recommend corrective measures.

Transfers of Personal Information to Service Providers

As under PIPEDA, organizations that transfer personal information to their outsourced service providers would be obligated under the CPPA to require the service providers to commit (whether contractually or otherwise) to protect the information with security and other safeguards at a level equivalent to what the CPPA requires the organization to maintain. Service providers also would have to notify the relevant organization in the event of any breach of the security safeguards. Otherwise, the CPPA would not regulate service providers with respect to personal information received from an organization, provided the service provider limits its use of such information to the purposes for which it was received.

Disposal of Personal Information

Applying the general privacy principle of personal data minimization, the CPPA would require, upon written request and subject to certain exceptions, an organization to dispose of personal information under its control as soon as feasible if:

  • the information was collected, used, or disclosed in contravention of the CPPA;
  • the individual has withdrawn their consent to the collection, use, or disclosure of the information; or
  • the information is no longer necessary to provide a product or service requested by the individual.

Upon disposing of any personal information, an organization would need to notify any service provider to which it has transferred the information that the service provider also must dispose of the personal information.
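
As a sketch of how the disposal obligation could cascade to service providers, consider the following illustrative Python workflow; the class and method names are hypothetical, and the bill's written-request requirement and exceptions are omitted for brevity:

```python
class ServiceProvider:
    def __init__(self):
        self.records = {}

    def dispose(self, individual_id: str) -> None:
        # The provider must also dispose of the transferred information.
        self.records.pop(individual_id, None)

class Organization:
    def __init__(self, service_providers):
        self.records = {}
        self.service_providers = service_providers

    def handle_disposal_request(self, individual_id: str) -> None:
        # Dispose of the information under the organization's own control...
        self.records.pop(individual_id, None)
        # ...then notify each provider that received it.
        for provider in self.service_providers:
            provider.dispose(individual_id)

sp = ServiceProvider()
org = Organization([sp])
org.records["id-123"] = {"name": "John Smith"}
sp.records["id-123"] = {"name": "John Smith"}
org.handle_disposal_request("id-123")
assert "id-123" not in org.records and "id-123" not in sp.records
```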

Because certain types of algorithms (machine-learning models, for example) incorporate the data with which they are developed, an individual's request for disposal of personal information potentially could require an organization to destroy the algorithm and start over.

Artificial Intelligence and Data Act

Unlike the CPPA and Tribunal Act, the AIDA does not build upon existing Canadian law. Instead, it would create an entirely new regime for Canada's international and interprovincial trade and commerce in AI systems.

Scope

If passed, the AIDA would govern private-sector "regulated activities" regarding AI systems.

An AI system is defined as "a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions." The similar Organisation for Economic Co-operation and Development (OECD) definition does not expressly capture content-generation systems, and it only reaches systems making decisions, recommendations, or predictions that influence real or virtual environments. It is unclear whether these and other distinctions in wording would result in material differences in scope.

A "regulated activity" is broadly defined as a wide range of activities related to AI development and use, including "designing, developing or making available for use an AI system or managing its operations" as well as "processing or making available for use any data relating to human activities for the purpose of designing, developing or using an AI system." The legislation would affect regulated activities in interprovincial and international trade and commerce, leaving open the possibility of additional intraprovincial regulation of AI by the provinces.

The AIDA would also have some extraterritorial application, although its extent would be determined through future regulations. It would therefore be prudent for multinationals whose global AI systems include components designed, developed, managed, or used in Canada to begin familiarizing themselves with the proposed framework.

High-Impact Systems

Like the EU's proposed AIA, the AIDA follows a risk-based approach. However, the AIDA would be far less burdensome for even high-risk systems—at least pending elaboration by future regulations. Perhaps as a result, the AIDA would divide AI systems into only two categories, high-impact and not high-impact, instead of the four tiers in the proposed AIA (unacceptable risk, high risk, limited (or "transparency") risk, and minimal or no risk).

For AI systems classified as "high-impact systems" pursuant to criteria established in future regulations, the person responsible would have to:

  • publish on a public-facing website a plain-language description of the system (one possible shape is sketched after this list), including explanations of:
    • how the system is intended to be used;
    • the types of content that it is intended to generate and the decisions, recommendations, or predictions that it is intended to make;
    • the mitigation measures set up as part of required risk-management; and
    • any other information prescribed by regulation;
  • establish and monitor measures to identify, assess, and mitigate risks of unlawful discrimination and other physical, psychological, property, or economic harms that could result from the use of such system; and
  • notify the government of any "material harm" likely to result from its use.

These obligations would be on top of the anonymized data management and recordkeeping requirements applicable to all AI systems.
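
For illustration only, the published plain-language description might be organized as a structured document like the following Python sketch; the schema, field names, and example system are our assumptions rather than anything prescribed by the AIDA or its future regulations:

```python
import json

# Hypothetical public-facing description of a high-impact system. The keys
# loosely track the AIDA's enumerated disclosure items; the schema is our own.
system_description = {
    "system_name": "Example resume-screening assistant",
    "intended_use": (
        "Helps recruiters prioritize job applications; a human makes "
        "every final hiring decision."
    ),
    "content_generated": "None",
    "decisions_recommendations_predictions": (
        "A ranked shortlist of applications recommended for human review."
    ),
    "risk_mitigation_measures": [
        "Periodic testing for unlawful discrimination across protected groups",
        "Human review of every recommendation before action is taken",
        "Logging and incident reporting for suspected material harms",
    ],
}

print(json.dumps(system_description, indent=2))
```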

Persons Responsible

The AIDA would impose a number of obligations on those legal persons who design, develop, or make available for use an AI system or manage its operation. These "responsible persons" would be required to: (1) establish measures to manage anonymized data; (2) assess whether their systems qualify as high-impact; and (3) maintain general records describing their compliance measures and supporting their impact assessment.

Authority

The AIDA would create a range of new order-making powers and audit rights for the designated government Minister. In addition to significant recordkeeping, audit, publication, and disclosure powers, the Minister could order any person responsible for a high-impact system to cease using it or making it available for use if there are reasonable grounds to believe that use of the system gives rise to a serious risk of imminent harm.

If passed, the AIDA would also establish the Office of the Artificial Intelligence and Data Commissioner.

Enforcement

A breach of the AIDA generally would be a civil violation, but breaches of certain provisions could be pursued as either a civil violation or a criminal offense. For civil violations, the AIDA would authorize regulations establishing an administrative monetary penalty regime, the stated purpose of which is to "promote compliance" and "not to punish."

An organization that contravenes any AIDA requirement, or that obstructs or provides false or misleading information during an audit or investigation, could face a criminal fine of up to the greater of $10 million (CAD) and three percent of its global revenues; a convicted individual would face a fine in the court's discretion.

Three criminal offenses would bear even steeper potential penalties:

  • possessing or using personal information in any stage of AI development, or in operating or providing AI systems, knowing or believing that the information was obtained unlawfully;
  • knowingly or recklessly making available an AI system "likely to cause serious physical or psychological harm to an individual or substantial damage to an individual's property" and which causes such harm or damage; and
  • making an AI system available for use with the intent "to defraud the public and to cause substantial economic loss to an individual," where its use causes that loss.

An organization that commits one of these three offenses could be fined up to the greater of $25 million (CAD) and five percent of its global revenues. Convicted individuals would be fined an amount in the court's discretion, imprisoned for up to five years less a day, or both.

Looking Forward and Around the Globe

While it is difficult to predict whether the Act will be adopted, its proposed privacy statutes already are facing some of the same criticism leveled at the similar predecessor bill, C-11, which died when the Canadian Parliament was dissolved for last September's election. Nonetheless, the Act represents Canada's contribution to the ongoing movement around the world to strengthen privacy laws and create new regulatory regimes to govern AI.

This trend extends well beyond the EU's proposed AIA. The UK government is seeking comment on its proposed AI regulatory framework and plans to introduce its AI-governance strategy late this year. In January, the Cyberspace Administration of China adopted its Internet Information Service Algorithmic Recommendation Management Provisions, and it is finalizing regulations for algorithmically created content, including virtual reality, text generation, text-to-speech, and "deep fakes." Brazil, too, is crafting a law regulating AI.

In the United States, the leading congressional privacy law proposals contain algorithmic-governance provisions roughly comparable to the AIDA's. The Federal Trade Commission is planning a rulemaking "to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination." And the White House Office of Science and Technology Policy is formulating an "AI Bill of Rights."

State and local governments also have begun to regulate algorithmic decision-making more stringently. Illinois requires employers that use AI systems to evaluate video interviews to notify job applicants and obtain their consent. New York City recently adopted a law subjecting automated employment decision tools to an annual "bias audit" by an independent auditor.

In this time of rapid change, businesses and other organizations should evaluate their current activities and global compliance programs to see whether they still suffice, particularly for new laws with extraterritorial application.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.