On Monday, February 26, 2024, Minister of Justice Arif Virani introduced Bill C-63, which seeks to enact the Online Harms Act (the Act) and amend existing legislation in order to better address hate and harms to children in the quickly evolving online landscape.

The Act seeks to promote online safety.[1] It will be administered and enforced by a new Digital Safety Commission of Canada (the Commission), and operators of social media services will be subject to new monitoring, disclosure and record-keeping obligations, backed by sizable monetary penalties.

What you need to know

  • The Act sets out several proactive and reactive obligations for operators of social media services, including a duty to act responsibly (e.g., developing digital safety plans), in respect of seven types of harmful content.
  • The Commission will have jurisdiction to hear and investigate complaints regarding content on social media services that sexually victimizes a child or revictimizes a survivor, or involves intimate content communicated without consent, and order operators to render such content inaccessible.
  • If operators of social media services do not comply with the Act, they can face monetary penalties of up to 6% of gross global revenue or $10 million, whichever is greater.
  • Bill C-63 also amends:
    • the Criminal Code to create a new hate crime offence and steeply increase the maximum sentences for hate propaganda offences;
    • the Canadian Human Rights Act to make online hate speech a discriminatory practice and empower the Canadian Human Rights Commission and Canadian Human Rights Tribunal to address complaints of online hate speech between individuals; and
    • the Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service to clarify and broaden its scope and application to all types of internet service providers.

Social media services will be subject to a wide range of new duties

The Commission will administer and enforce the Act, with the support of the Digital Safety Ombudsperson and the Digital Safety Office of Canada.

The Act will apply to operators of social media services. The definition of social media service goes well beyond typical players like Facebook, Instagram and Twitter/X: it is broadly defined as “a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content”. While the Act contemplates thresholds based on the number of users, services with a smaller user base may still be subject to the Act if there is a significant risk that harmful content is accessible on the service.[2] The definition also captures adult content and live-streaming services.[3]

The Act is concerned with seven types of “harmful content”, namely:

  • intimate content communicated without consent (including content created using deepfake technology);
  • content that sexually victimizes a child or revictimizes a survivor;
  • content that induces a child to harm themselves;
  • content used to bully a child;
  • content that foments hatred;
  • content that incites violence; and
  • content that incites violent extremism or terrorism.

While most of these categories are straightforward, “content that foments hatred” is more controversial. Content will foment hatred if it “expresses detestation or vilification”, but not “solely because it expresses disdain or dislike or it discredits, humiliates, hurts or offends”. This definition attempts to protect freedom of expression online, but the threshold at which content engages the provisions of the Act is unclear and will likely pose challenges in application.

Operators of social media services will be subject to a wide range of new duties and obligations, which include the following:

  • A duty to act responsibly.[4]
    • A prominent feature of this duty is that operators must prepare digital safety plans to submit to the Commission, containing an assessment of the risks to users, planned risk mitigation measures and a report on complaints handled.[5]
  • A duty to make inaccessible certain harmful content (which sexually victimizes a child or revictimizes a survivor, or involves intimate content communicated without consent).
    • Operators have only 24 hours to make such content inaccessible after it is flagged by a user or identified by the operator.
    • Operators are also obliged to preserve content rendered inaccessible for one year and to destroy it thereafter.[6]
  • A duty to protect children by integrating design features provided for by regulations (e.g., age-appropriate design).
  • A duty to keep records, including all information and data necessary to determine compliance with the Act.

People will be able to make complaints to the Commission regarding content on a social media service.[7] The Commission can hold a hearing in respect of a complaint or any other issue relating to an operator's compliance with its duties and obligations.[8] Complaints are, however, subject to a three-year limitation period.[9]

In the event of non-compliance, operators may be subject to compliance orders or may enter into undertakings with the Commission. The Commission may also require operators to pay administrative monetary penalties of up to 6% of gross global revenue or $10 million, whichever is greater.[10] If an operator contravenes the Commission's orders or the terms of an undertaking, it can face higher monetary penalties of up to 8% of gross global revenue or $25 million, whichever is greater.[11]

Balancing rights against the duty to act responsibly

The duty to act responsibly is fairly expansive, and its implementation raises some questions regarding the scope of an operator's legal obligations, as well as users' expression and privacy rights.

At a basic level, operators must establish public guidelines and standards of conduct,[12] and develop and use tools and processes to flag harmful content.[13]

Operators are expected to implement measures to mitigate the risk of users being exposed to harmful content.[14] However, they are not required to proactively search for and identify harmful content.[15] Regulations may also be enacted in the future to require operators to “use technological means to prevent content that sexually victimizes a child or revictimizes a survivor from being uploaded to the service”.[16]

The Act also appears to be alive to concerns regarding users' freedom of expression and privacy, which are addressed in part through limits on the scope of operators' obligations. Operators are not required to implement mitigation measures that would “unreasonably or disproportionately limit users' expression”.[17] In addition to stipulating that proactive searching for harmful content is not required, the Act provides that operators' duties do not apply to private messaging features.[18]

Nevertheless, without further guidance from the Commission, there is a risk that operators will find it burdensome to balance their legal obligations against the rights of individuals in a way that satisfies the Commission. Operators will also need to consider privacy law requirements, which are likely to shape how they meet their moderation and safety obligations under the Act.

Takeaways

The Online Harms Act has only just passed first reading. Further work will be needed to make it practicable, ideally including consultation with major social media platforms. In the meantime, businesses should consider the following in anticipation of coming changes:

  • Confirm whether business operations may be subject to this legislation, even if not currently operating as a typical social media platform. This might apply, for example, to apps and websites that focus on reviews or content creation.
  • Identify existing proactive and reactive content moderation processes, particularly as they concern content involving children, and compare them to the measures proposed in the Act to identify any gaps.
  • Review current information and data retention and destruction policies against the expectations in the Act, particularly those relating to complaints and removed content.
  • Map proposed legislative requirements against international obligations and practices to determine if these obligations will require different processes from those used globally.
  • Review user-facing policies and agreements (such as privacy policies or terms of service) for any required updates to reflect relevant practices.

Footnotes

1. Bill C-63, section 9(a), (b), (e) and (g).

2. Ibid. 2(1).

3. Ibid. 2(2).

4. Ibid. 54.

5. Ibid. 62. Digital safety plans do not, however, need to contain any personal information, and operators will not be required to provide an inventory of data, trade secrets, or confidential business information.

6. Ibid. 63, 67-68.

7. Ibid. 81.

8. Ibid. 88, 95.

9. Ibid. 118.

10. Ibid. 101.

11. Ibid. 120(2).

12. Ibid. 57.

13. Ibid. 58.

14. Ibid. 55-56. One qualification here is that the Act says this obligation must be shaped by considering various factors, including the size of the service and the technical and financial capacity of the operator.

15. Ibid. 7(1).

16. Ibid. 7(2).

17. Ibid. 55(3).

18. Ibid. 6(1).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.