In an ever-evolving digital landscape, where the complexities of internet use have never been greater, the introduction of the Online Safety Act marks a milestone in the regulation of online conduct and the protection of users in the United Kingdom (UK). This article provides an overview of the Online Safety Act, its key provisions, and the implications for affected businesses.
What is the Online Safety Act?
The UK government's Online Safety Act, which received Royal Assent on October 26, 2023, puts into law rules designed to make the UK "the safest place in the world to be online". The legislation is a proactive response to escalating concerns surrounding illegal and harmful online content, cyberbullying, digital privacy, and the dissemination of misinformation. The Act seeks to bolster the safety of internet users by imposing more stringent guidelines and regulations on specific online platforms, with non-compliance carrying the potential for significant fines.
Who is in scope?
The Online Safety Act imposes obligations on the following online service providers:
- User-to-user services: internet services that allow users to upload and access user-generated content. This includes social media networks, e-commerce marketplaces, dating applications, discussion forums, and online gaming services where user-generated content is prevalent.
- Search services: providers enabling users to navigate and search content across the internet, including traditional search engines and voice-activated devices with search capabilities.
- Other online service providers: providers of internet services where pornographic content (not user-generated) is published or displayed.
The Online Safety Act extends its reach beyond major 'Big Tech' platforms to encompass businesses of all sizes. All affected businesses, from well-resourced companies to small 'micro-businesses', must adhere to the rules, which also apply to individuals operating online services. Services will be designated as "Category 1", "Category 2A", or "Category 2B", with Category 1 services facing the most stringent obligations.
The Act applies not only to UK-based services but also to international providers that serve, or have a substantial user base in, the UK. It is important to note that a user may be an individual or an entity, and that registration status, i.e., whether or not the user has created an account on the platform, is irrelevant.
Key Provisions of the Act
While the specific responsibilities may vary depending on the nature of the service, most businesses under the Online Safety Act are required to:
- Assess the risk of harm from illegal content
- Evaluate the specific risk of harm to children from harmful content, especially if children are likely to use the service
- Implement effective measures to manage and mitigate identified risks, following published Codes of Practice
- Clearly articulate user protection measures in their terms and conditions (T&Cs)
- Facilitate easy reporting of illegal content and content harmful to children by users
- Ensure easily accessible channels for user complaints, especially when users perceive their content removal or account blocking to be unjust
- Consider the importance of preserving freedom of expression and the right to privacy when implementing safety measures
How will these new rules be enforced?
The UK's communications regulator, the Office of Communications (Ofcom), is at the helm of enforcing the Act. With the law now in effect, Ofcom will begin publishing codes of practice and launching consultations to guide platforms toward compliance.
Most of the rules are yet to come into force, as Ofcom is adopting a phased approach, with the first new duties expected to take effect at the end of 2024. However, businesses should start preparing now: once the new legal obligations take effect, a prompt, robust, and effective operational response will be expected.
Non-compliance could lead to:
- Hefty fines: businesses could face penalties of up to £18 million or 10% of their global annual turnover, whichever is greater; and
- Criminal liability: in certain cases, senior executives and managers may face criminal charges.
Meanwhile, platforms providing services to users in both the UK and the EU face the challenge of navigating the UK's Online Safety Act alongside the EU's Digital Services Act (DSA). The two regimes impose distinct requirements, particularly on content moderation, where the Online Safety Act's proactive approach sets it apart from the DSA.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.