20 January 2025

Content Moderation And The UK's Online Safety Act (OSA)

Lewis Silkin

The UK's Online Safety Act (OSA) requires tech companies to put in place systems to safeguard users from harmful and illegal online content, with a focus on proactive risk assessments, proportionate safety measures and compliance with codes of practice by March 2025.

We live in a time of increased public discourse and awareness in relation to online harms and digital regulation, including under the EU's Digital Services Act and the UK's Online Safety Act (OSA).

These flagship laws impose new legal duties on tech companies and service providers, with the OSA aimed at making "the UK the safest place in the world to be online".

Against this backdrop, the voices of tech companies and politicians advocating for the protection of freedom of expression and resistance to online "censorship" have grown louder, particularly following the 2024 US election.

Views have become more polarised, and the gap between the demands of the tech sector and the recent positions of (most) legislators/politicians has widened. In fact, last week Meta announced a series of changes intended to "dramatically reduce the amount of censorship" and "restore free expression" on its platforms (Facebook, Instagram and Threads).

The changes include:

  • replacing independent fact-checkers with a system of "community notes" (similar to that used on X), which allows users who participate in the program to write fact-checks for posts;
  • simplifying content policies and removing restrictions on topics such as gender and immigration;
  • taking a new approach to policy enforcement by: (i) refocusing its filters so that, instead of scanning for any policy violation, they specifically target illegal and higher-severity violations (for lower-severity violations, Meta will rely on user reporting); and (ii) tuning filters to require "higher confidence" before taking down content; and
  • moving content moderation teams from California to Texas.

This makes it a good time to refresh ourselves on the requirements of the OSA in relation to content moderation and to offer some general thoughts on compliance.

What is content moderation?

As we reported here, the ICO produced guidance in March 2024 on content moderation as part of its ongoing collaboration with Ofcom on data protection and online safety technologies. The guidance uses the term 'content moderation' to describe:

  • the analysis of user-generated content to assess whether it meets certain standards; and
  • any action a service takes as a result of this analysis (for example, removing the content or banning a user from accessing the service).

What is the Online Safety Act?

The OSA is a piece of UK legislation that was passed in 2023 with the aim of protecting children and adults online. It imposes a range of new duties on user-to-user services (including social media companies) and search services, requiring them to put in place systems and processes to improve user safety. In particular, it aims to protect children from harmful content and to limit illegal online content and activity.

The focus of the OSA is not on Ofcom moderating individual pieces of content, but on providers pro-actively assessing risks of harm to their users and putting in place systems and processes to keep them safer online. Proportionality is central to the OSA, and each provider's chosen approach will need to reflect its characteristics and the risks it faces.

The regime is being implemented in phases and relies heavily on secondary legislation and codes of practice. Importantly, following Ofcom's publication of its illegal harms statement (accessible here), providers are now legally required to protect their users from illegal harm. They have until 16 March 2025 to assess the risk of illegal harms on their services, and from 17 March 2025 (subject to the codes completing the parliamentary process) they will need to take the safety measures set out in the codes of practice (or use other effective measures) to protect users from those illegal harms.

It's worth noting that organisations based outside the UK are not shielded: the OSA has extraterritorial effect, applying to user-to-user services that are "regulated" in the sense that they have "links" with the UK, e.g. because the service targets the UK or has a significant number of UK users (thereby capturing international tech organisations).

What obligations does the Online Safety Act impose on tech companies in relation to content moderation?

Illegal/harmful content

Under the OSA, all social media platforms (as providers of regulated user-to-user services) are required to actively monitor and remove illegal content and content that is harmful to children. This involves not only eradicating existing illegal/harmful content, but also preventing it in the first place. See below a summary of the relevant duties:

A duty, in relation to a service, to take proportionate measures relating to the design or operation of the service to:

Section 10(2) (Illegal Harms)
  • prevent individuals from encountering priority illegal content by means of the service;
  • effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence*, as identified in the most recent illegal content risk assessment of the service; and
  • effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service.

Section 12(2) (Protection of Children)
  • mitigate and manage the risks of harm to children in different age groups, as identified in the most recent children's risk assessment of the service; and
  • mitigate the impact of harm to children in different age groups presented by content that is harmful to children present on the service.

A duty to operate a service using proportionate systems and processes designed to:

Section 10(3) (Illegal Harms)
  • minimise the length of time for which any priority illegal content is present; and
  • where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

Section 12(3) (Protection of Children)
  • prevent children of any age from encountering, by means of the service, primary priority content** that is harmful to children; and
  • protect children in age groups judged to be at risk of harm from other content that is harmful to children (or from a particular kind of such content) from encountering it by means of the service.

*A priority offence is a terrorism offence, a child sexual abuse and exploitation offence, or any other offence listed in Schedule 7 of the OSA.
**Primary priority content is pornographic content (with some limited exceptions) and content which encourages, promotes or provides instructions for suicide, self-injury or an eating disorder.

The OSA does not prescribe the measures that a company must take to comply with these duties but provides a list of measures that a provider may be required to take or use "if proportionate to do so". These include "content moderation, including taking down content" as well as "functionalities allowing users to control the content they encounter" and "design of functionalities, algorithms and other features".

Whether, and to what extent, a provider's measures or systems/processes are "proportionate" is of course context-dependent, and will require analysis of the size, complexity and scope of the company and its services, the level of risk, and the resources required to implement the measures effectively. The largest and best-known platforms will inevitably be held to the most onerous interpretation of the duties.

It is worth noting that the supplementary Illegal Harms Content Code of Practice (published in December 2024, and accessible here) specifically recommends "content moderation" as a measure to be taken with respect to all services. It also makes additional recommendations, e.g. that the provider should set and record performance targets for its content moderation function (covering at least the time taken to carry out relevant content moderation action and the accuracy of decision-making), and that the provider should give its content moderation function sufficient resource to give effect to its internal content policies. While service providers are not obliged to follow the codes under the OSA, Ofcom has stated that providers that do will be treated as compliant with the relevant duties.

Misinformation

The OSA does not require tech companies to moderate content for misinformation (being false or inaccurate information); in fact, it does not directly regulate misinformation at all. However, misinformation might be indirectly caught by:

  • the duties imposed on a service provider with respect to illegal content and content that is harmful to children (referred to above);
  • the requirement for service providers to uphold their terms of service (if misinformation is prohibited in their terms of service, they will have to remove it); or
  • the new "false communications offence" which prohibits the intentional sending of false information that could cause 'non-trivial psychological' or physical harm to users online, and which is predominantly aimed at internet trolls.

Final thoughts

Given the OSA's limited remit in respect of misinformation, the key question for a tech company to consider from a compliance perspective is whether it complies with its duties regarding illegal harms and the protection of children. These duties can be satisfied with or without content moderation. The fact that the OSA does not prescribe required steps for compliance, and that the recommendations set out in the codes of practice are not mandatory, means that the devil will be in the detail and that the measures taken by a service provider will be assessed on a case-by-case basis.

The regulatory rhetoric has been robust. In October 2024, the CEO of Ofcom gave the following warning: "2025 will be a pivotal year in creating a safer life online... Our expectations are going to be high, and we'll be coming down hard on those who fall short".

It will be interesting to see how Ofcom's assessments are carried out in practice and to explore any instances of non-compliance (including the penalties imposed, which are serious and, in certain circumstances, include criminal liability).

It's also worth noting that, following the rioting in the UK during the summer of 2024, which was seen largely as a response to misinformation spread via social media, there has been discussion in the media about whether the OSA could be amended to tackle misinformation more effectively. Ofcom has said it will consult in the spring of 2025 on crisis response protocols for emergency events, such as the summer 2024 riots. However, things move quickly in the digital world, and these discussions started before the era of Trump 2.0.

Also, these discussions pre-dated the change of mood music coming from the UK government, which has recently seen Rachel Reeves, the Chancellor of the Exchequer, meet with the major UK regulators to ensure they are more focused on growth than on applying these regulations in an overly zealous way. These are the very rules and regulations that the Labour party has been instrumental in developing and shaping over the past several years, even before it became the party of government... so, this is all becoming a bit 'meta'.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
