Julian Hayes and Andrew Watson discuss the controversial UK proposals for tackling online harms and the potential threat they may pose to freedom of speech.
The rise of the internet in the first decades of the 21st century marks an epoch-defining moment, ushering in the "era of information abundance" and bringing both unparalleled benefits and significant drawbacks with which we are only beginning to grapple. The UK government's Online Harms White Paper, published in April 2019, is part of this process. Consultation on the paper, which aims to make the UK the world's safest place to go online, closed in July 2019 but, given the fundamental objections raised in the published responses, draft legislation seems unlikely any time soon.
The list of online harms in the government's sights is a roll call of modern social ills, from widely understood scourges such as terrorist content, child sexual exploitation and cyberstalking to less clearly defined phenomena like disinformation, trolling and intimidation. The list is deliberately open-ended to ensure the eventual legislation keeps pace with changing technology and online habits.
Until now, tackling such harms has relied on a patchwork of criminal law and regulation aimed at specific issues, coupled with voluntary initiatives. However, following high-profile incidents both in the UK and abroad, the government has concluded that firmer action is necessary. It has proposed a statutory duty of care which would be imposed on entities as diverse as tech giants and social media companies, public discussion forums, cloud hosting providers and even retailers inviting online product reviews. Those affected would be required to take reasonable steps to keep users safe and to prevent others from coming to harm as a direct consequence of activity on their services. The duty would be underpinned by regulatory codes issued by a dedicated regulator and, to facilitate enforcement action, overseas entities could be required to nominate a UK representative. Breaches of the duty could lead to fines, the blocking of non-compliant websites and, in the worst cases, the imposition of civil and even criminal liability on senior company managers.
The UK is not alone in seeking new legislative tools to protect people online – policymakers in France, Germany and Australia have all introduced national laws for that purpose and, according to a recent leak, the European Commission ("the Commission") is considering a so-called "Digital Services Act", to be introduced in 2020 and enforced by a European supra-regulator, which would tackle issues such as online hate speech and disinformation. Already proving highly controversial is the Commission's suggestion of incentivising proactive measures such as automated algorithmic filtering – in effect, a requirement to monitor user content.
The UK proposals go significantly further than the Commission's, effectively stripping away the "safe harbour" which platform providers currently enjoy under EU law and requiring active steps to remove (and in some instances prevent the uploading of) harmful content if liability for third-party content is to be avoided. Some critics have questioned whether such a step is justified when the behaviour at which the proposals are aimed – cyber-bullying, for example – is a symptom of a deeper societal ill rather than something which the platform provider has caused. Others have suggested that, since much of what we view on social media is dictated by unseen algorithms tracking our behaviour, an alternative approach to tackling some online harms might be to restrict the use of such hidden processing by big tech organisations.
The key objection raised to the UK government's proposals, however, is the potential threat they may pose to freedom of speech. That threat arises both from the nature of the duty of care itself and from the vague definitions and open-ended list of the harms the proposals seek to address.
As the Internet Association, which comprises the world's leading search and social media companies, suggests, "duty of care" carries a specific legal meaning which might work for obvious risks of, say, physical injury, but does not easily fit the ambiguity of many online harms. For example, at what point does the expression of misguided but genuinely held anti-vaccine views become "disinformation", and how is a platform provider owing the duty of care to decide? In such circumstances, there is concern that fear of regulatory action could lead organisations, out of excessive caution, to self-regulate disproportionately and to adhere rigidly to the new regulator's codes of practice even where those codes are not appropriate.
The White Paper itself acknowledges that many of the harms at which it is aimed, such as "intimidation" and "coercive behaviour", are vaguely defined. Putting aside the principle against imposing regulatory and criminal liability for ill-defined behaviour, companies – particularly start-ups and small and medium-sized enterprises – would be likely to err on the side of caution and delete material in borderline cases. In practice, this would usher in "upload filtering" to prevent the publication of material arbitrarily deemed harmful, as the sketch below illustrates. This may ultimately have a chilling effect on public discourse and drive people to the dark web, where the potential for exposure to extremes is greater still.
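To see why a cautious operator tends towards over-blocking, consider a deliberately simplified, hypothetical upload filter. Everything in it – the scoring logic, the marker terms and the threshold – is an invented assumption for illustration, not a description of any real platform's system.

```python
# Hypothetical upload filter: illustrative only, not any real platform's system.
# Scores, marker terms and the threshold are invented for this example.

from dataclasses import dataclass

@dataclass
class Upload:
    user: str
    text: str

def harm_score(upload: Upload) -> float:
    """Toy scoring model. A real system might use a trained classifier,
    but vaguely defined harms mean borderline content gets mid-range scores."""
    vague_markers = ["vaccine", "protest", "dispute"]  # deliberately ambiguous terms
    hits = sum(marker in upload.text.lower() for marker in vague_markers)
    return min(1.0, 0.3 * hits)

# Under a duty of care backed by heavy penalties, the operator's safest
# choice is a low threshold: fewer missed harms, more lawful speech blocked.
BLOCK_THRESHOLD = 0.25  # assumption: set cautiously low to avoid sanctions

def filter_upload(upload: Upload) -> bool:
    """Return True if the upload is published, False if pre-emptively blocked."""
    return harm_score(upload) < BLOCK_THRESHOLD

posts = [
    Upload("a", "Photos from my holiday"),             # published
    Upload("b", "Why I dispute the vaccine schedule"),  # blocked, though arguably lawful
]
for p in posts:
    print(p.text, "->", "published" if filter_upload(p) else "blocked")
```

Because the cost of missing "harmful" content (a regulatory sanction) dwarfs the cost of wrongly blocking lawful speech (an aggrieved user), the rational setting for the threshold is low – and borderline but legitimate material is the casualty.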
The non-exhaustive list of online harms at which the proposals are aimed is no doubt a well-intentioned attempt at "future-proofing" the legislation. However, the lack of a clear remit for the new regulator risks attracting intense media and political pressure to take action in the event of future scandals which only peripherally involve the online sphere. Further, the open-ended nature of the proposed regulatory remit risks encouraging more repressive regimes around the world to follow suit, justifying crackdowns on legitimate political dissent by reference to the UK regime.
Alongside these principled objections, commentators have warned of the financial risk the proposals pose to the UK's digital sector, said to contribute £45 billion to the UK's GDP. Although the proposals maintain that the regulator would take account of a company's size and "reach" when assessing compliance, the cost of introducing measures such as upload filtering, particularly for start-ups and SMEs, may be prohibitive. Some companies may simply conclude that the regulatory burden is too onerous to justify offering their services to UK customers.
The pervasiveness of the internet is bringing with it sometimes bewildering social and economic change, while amplifying many familiar problems and prompting calls for tighter restrictions. The government's White Paper is a recognition of the irreversible societal changes taking place and, in effect, a first step towards establishing the acceptable norms of the future. However, when considering responses to the consultation and formulating the parameters which will shape our relationship with technology, legislators should be careful to strike a balance between the need for regulation and the right to free speech.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.