According to Ofcom, three in five teenagers have seen harmful content online within a four-week period, and children describe online content about suicide and self-harm as "prolific".
Ofcom has now published its final guidance on how to protect children under the Online Safety Act 2023. This follows consultation on draft guidance and builds on the rules already in place to protect all users from illegal harms. Service providers must prevent children from encountering the most harmful content relating to suicide, self-harm, eating disorders and pornography. They must also act to protect children from misogynistic, violent, hateful or abusive material, online bullying and dangerous challenges.
Providers of online services that come within the scope of the children's duties must complete and record children's risk assessments by 24 July 2025. Subject to the Codes completing the Parliamentary process, from 25 July 2025 providers will need to apply the safety measures set out in the Codes, or use other effective measures, to protect child users from content that is harmful to them. Ofcom has emphasised that it is ready to take enforcement action if providers do not act promptly to address the risks to children on their services.
The guidance is complex, running to five volumes, although there is also a helpful summary.
Volume 4 is the key document: it sets out the 40 or so steps service providers should take to protect children. For some measures, whether they apply to a provider of a particular service will depend on (i) the level of risk of the service, (ii) whether it meets other specific risk criteria (such as having relevant functionalities), and/or (iii) the size of the service.
Ofcom refers to "large services" for both user-to-user and search measures. A service is large where it has more than seven million average monthly active UK users.
Ofcom also uses the term "multi-risk" for content harmful to children: a multi-risk service is one that the provider has assessed as medium or high risk for at least two different kinds of content harmful to children. This criterion, and any other criteria tied to risk levels, depends on the outcome of the service's children's risk assessment.
For search services, some of the measures distinguish between two kinds of search service:
- general search services, which operate by means of an underlying index of URLs and enable users to search the internet by inputting search requests; and
- vertical search services, which enable users to search for specific topics, products or services offered by third-party operators.
The measures include:
- Safer feeds. Personalised recommendations are children's main pathway to encountering harmful content online. Any provider that operates a recommender system and whose service poses a medium or high risk of harmful content must configure its algorithms to filter harmful content out of children's feeds.
- Effective age checks. The riskiest services must use highly effective age assurance to identify which users are children, so that they can shield those users from harmful material while preserving adults' rights to access legal content. That may involve preventing children from accessing the entire site or app, or only certain parts or kinds of content. Services that have minimum age requirements but do not use strong age checks must assume that younger children are on their service and ensure they have an age-appropriate experience.
- Fast action. All sites and apps must have processes in place to review, assess and quickly tackle harmful content when they become aware of it.
- More choice and support for children. Sites and apps are required to give children more control over their online experience. This includes allowing them to indicate what content they don't like, to accept or decline group chat invitations, to block and mute accounts and to disable comments on their own posts. There must be supportive information for children who may have encountered, or have searched for, harmful content.
- Easier reporting and complaints. Children must find it straightforward to report content or complain, and providers should respond with appropriate action. Terms of service must be clear, so children can understand them.
- Strong governance. All services must have a named person accountable for children's safety, and a senior body should annually review the management of risk to children.
Next steps
Ofcom has said that it will be consulting on additional measures, including to:
- ban the accounts of people found to have shared child sexual abuse material (CSAM);
- introduce crisis response protocols for emergency events;
- use hash matching to prevent the sharing of non-consensual intimate imagery and terrorist content;
- tackle illegal harms, including CSAM, through the use of AI;
- use highly effective age assurance to protect children from grooming; and
- set out the evidence on the risks associated with livestreaming and make proposals to reduce those risks.
The media has reported that the UK government is coming under pressure from the US government to water down the protections in the Online Safety Act as part of a UK-US trade deal. However, on 24 April, The Times reported that Peter Kyle, the technology secretary, had said that he was not afraid to encourage Ofcom to use its powers to fine technology companies over breaches. It also reported that the government is considering further measures, such as a social media curfew for children (a ban on mobile phones for the under-16s has also been discussed). Other organisations, such as the Online Safety Network, say that the Online Safety Act needs to be amended to protect children effectively.
While the debates rage on, tech companies must ensure that they have carried out the necessary risk assessments and that they are ready to introduce the relevant measures to protect children.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.