The OSA and new duties for service providers
Widely hailed as legislation which would make the UK the safest place to be online, the much-anticipated Online Safety Act (OSA) came into force in October last year. This piece of legislation seeks to change the regulatory landscape by placing duties on social media companies and search services to protect children and adults online.
The OSA has significantly shifted the liability regime for platforms that host content online. Under the old regime, platforms were not required to actively monitor for unlawful content and would not be liable provided they acted expeditiously to remove or disable access to unlawful content once they received notice of it. In practice, this model of content moderation has its limitations: as platforms grow, it becomes difficult to identify and remove all unlawful content, and some systems rely heavily on reports from other users to identify prohibited material.
Following the introduction of the Act, in-scope providers must now take an active role in monitoring the content they host online. For the platforms involved, compliance is likely to require significant resources and attention over the coming years.
The duties placed on service providers are broad, and the OSA places the most onerous requirements on platforms it considers to be the highest risk. All platforms must introduce measures to tackle illegal activity and to swiftly remove illegal content when it does appear. For platforms likely to be accessed by children, there is also a duty to prevent them from accessing harmful content.
Whilst we know what the OSA says, we don't yet know how it will be enforced in practice. Full implementation is still some time away. Ofcom – the designated regulator – is responsible for giving the legislation its detail and shape, developing draft codes of practice and seeking feedback from stakeholders through consultations. Whilst this process is important to ensure that the legislation takes effect in a transparent and fair way, the duties have not yet come into force almost 10 months on. They are expected to start applying from the end of 2024, when the first codes of practice will be submitted to the Secretary of State and, once approved, laid before Parliament.
Once the duties are in force, Ofcom will also have the power to issue penalties of up to £18m or 10% of qualifying worldwide revenue, whichever is higher, where platforms fail to comply with their obligations under the Act. Senior managers of in-scope organisations can also be held criminally liable for a company's failure to comply with Ofcom's information requests.
Both the OSA and the speed of its implementation have come under scrutiny in the wake of the recent public disorder in the UK. It has been suggested that the disorder was sparked, in part, by false information spreading rapidly via social media platforms.
Under the OSA, racially or religiously aggravated public order offences and inciting violence are types of so-called 'priority' illegal content. Once the obligations take effect, all user-to-user service providers will have a duty to:
- take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content; and
- operate the service using proportionate systems and processes designed to minimise the length of time priority illegal content is present and, once the provider becomes aware of any illegal content, to take it down swiftly.
Whilst there is currently no enforceable obligation on platforms to remove this illegal content before the regime's guidance is finalised, Ofcom has encouraged platforms to act now, noting in an open letter that "there is no need to wait to make your sites and apps safer for users". Many platforms will already have terms of use which prohibit the type of racist and violent content of concern, but the speed at which content can spread may make it difficult for them to respond effectively. The proactive monitoring prescribed by the Act will require providers to go much further in actively tackling illegal content.
What criminal consequences could individual users face?
For individual users of social media platforms, sending malicious communications and inciting religious or racial hatred are existing criminal offences, and several individuals have recently been prosecuted on this basis for their online activity. The sentences can be substantial: custodial sentences of 38 and 20 months have been handed down to individuals convicted of publishing written material intended to stir up racial hatred, under s19(1) of the Public Order Act 1986.
The OSA has also introduced some new criminal offences for individuals, which took effect from 31 January this year. The offences include:
- encouraging or assisting serious self-harm;
- cyberflashing;
- sending false information intended to cause non-trivial harm (known as the 'false communications' offence);
- sending threatening communications;
- intimate image abuse; and
- epilepsy trolling.
Since these offences came into force, there have already been convictions under the Act, in particular for the cyberflashing and threatening communications offences. Perhaps unsurprisingly, threatening communications has also been used as the basis for prosecution in recent days: one conviction resulted in a sentence of 15 months in prison, following a hate-related message posted by an individual on Facebook earlier this month.
Given that the circulation of misinformation is said to have contributed to the recent public disorder, would any of these new offences tackle this issue? The false communications offence may apply in narrow circumstances. Under section 179 of the Act, it is an offence for a person to send a message containing information they know to be false, intending the message or the information in it to cause "non-trivial psychological or physical harm to a likely audience". A likely audience is someone who could reasonably be foreseen to encounter the message or its content.
It is also reported that an individual has been sentenced to three months in prison following a conviction for the false communications offence, as a result of posting a video in which he falsely claimed that he was being chased by far-right rioters. Whilst it is notable that this new offence is already being used successfully in prosecutions, it seems unlikely to have wide application. A key limitation is that the offence requires the individual to know that the content is false, so a communication based on a genuine misunderstanding would not be caught.
Otherwise, the only references to misinformation in the Act relate to the creation of a committee on misinformation to advise Ofcom and to changes to Ofcom's media literacy policy. Some have suggested that, in the wake of the public disorder, the Act is not fit for purpose. In response, Nick Thomas-Symonds, the Paymaster General, indicated that the Government was prepared to review the OSA quickly in light of these criticisms and stood "ready to make changes if necessary". However, more recent comments from Keir Starmer's spokespeople suggest that the Act is not currently under active review. Although the Prime Minister has acknowledged that the Government will need to look more broadly at social media after the disorder, for the time being the focus appears to be on ensuring that the Act is implemented quickly and effectively.
In-scope service providers will need to wait and see how Ofcom shapes this legislation in the coming months, and the platforms involved will need to be ready to adapt to a new era of content regulation.