As artificial intelligence ("AI") technologies rapidly advance, they offer both exciting opportunities and unique challenges. Governments, regulators, consumer groups and AI ethicists worldwide are grappling with the challenge of promoting the positive impact of AI while addressing its potential risks.

Even Mira Murati, the Chief Technology Officer at OpenAI, which created ChatGPT, has publicly advocated for AI regulation.1 With such a prominent figure in the AI field endorsing the need for regulation, it is no surprise that AI regulation has become a major issue on the global regulatory agenda.

Lawyers from across McMillan LLP are actively staying abreast of AI developments. This bulletin summarizes recent developments in AI regulation, both in Canada and abroad.

Canada

  • AIDA Goes to Committee. Since last summer, Canada's Parliament has been considering the Artificial Intelligence and Data Act (the "AIDA") as part of Bill C-27. The AIDA has been criticized for giving the government vast powers (including the ability to impose fines of up to $25 million or 5% of gross global revenue) while leaving the specifics to unpublished regulations. Nonetheless, legislators voted to send the bill to committee on April 24, bringing it one step closer to becoming law.

This development follows the release of a companion policy to AIDA by Innovation, Science and Economic Development Canada ("ISED").2 The companion policy describes ISED's proposed two-year strategy for developing regulations, in which it would consult with stakeholders and attempt to align regulations with international standards. For a full summary of ISED's companion policy, please see McMillan's full bulletin on the topic here.

  • OPC investigating ChatGPT. In April 2023, the Office of the Privacy Commissioner (the "OPC") announced that it had opened an investigation into OpenAI, in response to a complaint alleging that personal information was collected, used and disclosed without consent.3 In the press release, the OPC said that AI technology would be a key priority for the office.

In addition, the OPC has recently released a two-part reference guide on the topic of algorithmic fairness, signaling its fluency with the technical aspects of AI systems.4

  • OSFI issues Guidance for AI Best Practices. In the financial services industry, the Office of the Superintendent of Financial Institutions ("OSFI") and the Global Risk Institute issued a joint report in April 2023 outlining best practices for AI risk management.5 The report sets out the "EDGE" principles (Explainability, Data, Governance and Ethics) for the responsible use of AI in financial institutions. OSFI is expected to release further guidance on the use of AI for public consultation later in 2023. For a full summary of this report, please see McMillan's full bulletin on the topic here.
  • Federal Government Updates its Directive on Automated Decision-Making. The Canadian Government updated its Directive on Automated Decision-Making on April 25, following a period of stakeholder engagement. The updates include an expanded scope and new measures for explanation, bias testing, data governance, GBA+, and peer review.6

United States

  • Warnings from the FTC. The Federal Trade Commission recently published a pair of blog posts signaling increased scrutiny of the use of AI. The posts warn about AI's potential for fraud and deception, and caution companies against making overly broad claims about AI in their advertising.7
  • Policy Statement from USCO. The U.S. Copyright Office has provided guidance for copyright registration of works containing AI-generated content. The guidelines stipulate that individuals can claim copyright protection only for their own contributions to a work, with any more-than-minimal AI-generated content excluded from the claim.8
  • Collaboration with Europe on AI Innovations. At the start of 2023, the European Union and United States governments signed an Administrative Arrangement on Artificial Intelligence, building upon the principles contained in the Declaration for the Future of the Internet.9 The agreement allows the EU and US to share resources and collaborate on AI research, with the goal not of mitigating the potential risks of AI, but of using emerging technologies to address worldwide challenges ranging from climate change, natural disasters, and health and medicine to electric grid optimization and agriculture.
  • Hints at a Federal AI law? In April 2023, the Biden Administration announced it was seeking comments on potential accountability measures for AI technologies in the wake of the rise of ChatGPT.10 The National Telecommunications and Information Administration plans on creating a subsequent report, which will inform the Biden Administration's work in this area.

The US has also started to consider potential legislative steps towards the regulation of ChatGPT. Senator Chuck Schumer has circulated a proposed framework for a new regulatory regime for AI. This represents the clearest sign to date that the US plans to implement AI regulation. While no legislation has been proposed, it is clear that discussions surrounding the regulation of AI and ChatGPT are prominent in Washington.

International Developments

  • EU Urged to Expand AI Act. The European Union has also anticipated the need for legislation governing the use of AI. The Artificial Intelligence Act was originally proposed in 2021 and is currently under discussion in the European Parliament.11 With the rise of ChatGPT, a group of prominent experts and institutional signatories recently released an open letter calling for an expansion of the AI Act to regulate forms of general purpose AI (including ChatGPT and AI image generators).12
  • United Kingdom Released White Paper. The UK recently published a white paper aimed at guiding the use of AI technology in the country, while also maintaining public trust in the technology.13 The paper is guided by five principles, namely fairness, transparency, safety, accountability and redress or contestability, and it seeks to ensure that any new regulations do not unduly restrict innovation. Under the proposed approach, existing regulators will develop context-specific approaches to regulating AI technology based on how it is used within their respective sectors. The aim is to encourage responsible innovation while ensuring that AI is developed and used in a way that benefits society as a whole.
  • EU Privacy Regulators Shine Spotlight on ChatGPT. Italy banned the use of ChatGPT in March 2023, becoming the first Western country to do so.14 Following this ban, and other regulatory investigations in Germany, Spain, and France, the European Data Protection Board has launched a dedicated task force on ChatGPT.15
  • China. China's cyberspace regulator has unveiled draft regulations for generative AI services.16 The draft regulations require that generated content reflect the core values of socialism and not discriminate against people on bases such as race, ethnicity or gender.
  • Japan. Japan appears to be taking a different course with generative AI regulation. The Japan Times reported that the government is not considering regulating ChatGPT, and instead will explore the possibility of using software such as ChatGPT to reduce the workload of public servants.17

Footnotes

1. The Creator of ChatGPT Thinks AI Should Be Regulated | Time Magazine.

2. ISED Releases Companion to Proposed AI Law: Timelines, Guidelines, and Enforcement | McMillan LLP.

3. OPC launches investigation into ChatGPT | Office of the Privacy Commissioner of Canada.

4. Privacy Tech-Know blog: When worlds collide – the possibilities and limits of algorithmic fairness Part 1 and Part 2 | Office of the Privacy Commissioner of Canada.

5. AI in Financial Services: Joint OSFI and GRI Report Highlights Need for Safeguards and Risk Management as a Prelude to Enhanced OSFI Guidance | McMillan LLP.

6. Directive on Automated Decision-Making | Government of Canada.

7. Chatbots, deepfakes, and voice clones: AI deception for sale and Keep your AI claims in check | Federal Trade Commission.

8. Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence | U.S. Copyright Office.

9. The European Union and the United States of America strengthen cooperation on research in Artificial Intelligence and computing for the Public Good | European Commission.

10. US begins study of possible rules to regulate AI like ChatGPT | Reuters.

11. The European Union's Artificial Intelligence Act, explained | World Economic Forum.

12. Five considerations to guide the regulation of "General Purpose AI" in the EU's AI Act | AI Now Institute; EU to regulate 'general purpose' AI like ChatGPT | TechMonitor.

13. UK unveils world leading approach to innovation in first artificial intelligence white paper to turbocharge growth | Gov.UK.

14. Italy became the first Western country to ban ChatGPT | CNBC.

15. EDPB resolves dispute on transfers by Meta and creates task force on Chat GPT | European Data Protection Board.

16. China releases rules for generative AI like ChatGPT after Alibaba, Baidu launch services | CNBC.

17. Japan not considering regulations on ChatGPT | The Japan Times.

The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.

© McMillan LLP 2021