Canada's Office of the Privacy Commissioner (OPC) has just released "A Regulatory Framework for AI: Recommendations for PIPEDA Reform" after months of consultation. Since January 2020, the OPC has been consulting on a proposal for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA). The report came the day before the federal government announced a new bill to modernize PIPEDA and create stronger privacy enforcement mechanisms through an enforcement tribunal.
Highlights of the OPC's Report
The OPC's report tables three notable recommendations:
Recommendation #1: The OPC acknowledges that AI is highlighting some of the practical challenges with a consent-based model in Canada. The OPC is now prepared to recommend exceptions to consent for i) the use of personal information for research and statistical purposes, ii) compatible purposes, and iii) legitimate commercial interests. Recognizing these grounds as exemptions to consent is significant for Canadian innovation and aligns well with many trends in data protection that we see around the globe.
Although clarification is needed around exactly what parameters would justify the use of these exceptions, the OPC recommends three safeguards:
- Privacy Impact Assessment: Requires a PIA to assess legal compliance and to identify and mitigate risks to privacy
- Balancing Test: Test to balance i) the purpose, ii) the necessity and proportionality of the measure, and iii) consideration of the interests and rights of the individual
- De-identification: Information must be de-identified for research or statistical purposes, and to the extent possible for legitimate commercial interests
Recommendation #2: The OPC's second recommendation relates to the use of automated decision-making systems. To reduce the privacy risks associated with these systems, the OPC recommends that individuals be provided with two explicit rights:
Right to Meaningful Explanation: This right would allow individuals to understand the nature and elements of the decision to which they are being subject, or the rules that define the processing and the decision's principal characteristics. The OPC also indicated that the definition would be similar to that found in the GDPR, which refers to providing individuals with "meaningful information about the logic involved" in decisions.
Right to Contest: The OPC recommends that this right includes both the ability for an individual to express their point of view to a human intervener and a right to withdraw consent to the use of automated decision-making.
Recommendation #3: The final recommendation made by the OPC pertains to the means necessary to incentivize and enforce compliance. Accordingly, the OPC submits that PIPEDA should incorporate a right that "would mandate demonstrable accountability for all processing of personal information" conducted by a company. This includes requiring organizations to design for privacy and human rights prior to and during all phases of collection and processing, and a requirement for organizations to log and trace the collection and use of personal information. Unsurprisingly, the document also calls for increased powers for the OPC, including greater authority to issue binding orders and impose financial penalties.
Overall, the direction taken by the OPC addresses several important regulatory gaps that exist as a result of emerging technologies. These are undoubtedly important steps in the right direction. However, in our view, three important issues still need to be considered:
First, the issue of regulating AI may be broader than the scope of the Privacy Act, PIPEDA, and the Access to Information Act. Regulating AI is an important discussion that perhaps should not be limited to these three statutes, but should instead take place in the broader Government of Canada legislative context and involve other actors, such as the Competition Commissioner. We also should not forget the reference to the creation of a Canadian Data Commissioner in the December 2019 Mandate Letter to the Minister of Innovation, Science and Economic Development, which called on the Minister to "create new regulations for large digital companies to better protect people's personal data and encourage greater competition in the digital marketplace. A newly created Data Commissioner will oversee those regulations." Whether the Privacy Commissioner is best suited to enforce and oversee broad AI regulation is a timely and important question.
Second, the OPC's paper does not reflect a commitment to working in collaboration with cross-sectoral stakeholders, including industry, when it comes to developing guidelines and codes of practice. There is no mention in the recommendations of publishing guidelines that would include co-created codes of practice with industry, despite this being legislated in a number of jurisdictions. For example, Ireland's Data Protection Act has a provision instructing the Data Commissioner to "encourage trade associations and other bodies representing categories of data controllers to prepare codes of practice with those dealing with personal data." [Art. 14(a)(2)]. The GDPR also contains similar rules involving co-regulation of codes of conduct [Art. 40–43]. This collaborative approach would help build trust, rather than leaving the OPC as the sole authority responsible for regulating AI.
Third, building on international best practices, we note a gap in the document when it comes to "risk-based approaches" in developing AI frameworks; i.e., lower-risk data uses should typically require fewer, if any, controls, whereas high-risk applications should require more detailed controls. This risk-based approach is consistent with proposed frameworks now being developed in the US and the EU with regard to emerging uses of data such as artificial intelligence.[1] AI systems are nuanced and complicated and need to be understood in context; regulations should be flexible and based on a diligent assessment of risk. In our view, a proposal to regulate AI that is not grounded in a risk-based approach will not promote an effective balance between diligence and innovation at this time of AI discovery, development, and operationalization.
What is clear is that responsible innovation is becoming a key Canadian value and competitive advantage. This means that now is the time for organizations to proactively build the mechanisms to support demonstrable accountability and to document the due diligence undertaken before implementing an AI activity. The next phase in responsible AI is here: Canadian legislators will imminently call on companies to measurably demonstrate the ethical and legal rigor that went into their AI innovation.
[1] See, for example, EU White Paper (2020), Artificial Intelligence — A European approach to excellence and trust; US Draft Memorandum, Russell T. Vought, Guidance for Regulation of Artificial Intelligence Applications.