Introduction

Governments across the world have sought to regulate online content. The need for such regulation has arisen from a number of factors, including the posting of content that depicts live terror attacks or contains hate speech, child pornography, cyber-bullying material or fake news.

The following overview highlights how various countries censor online content. It is not exhaustive and is limited to the most relevant laws governing the takedown of illegal online content in each jurisdiction.

Jurisdictions with Limited Regulation

Observations


Many liberal democracies, such as Germany, Canada, and Australia, have placed only limited and specific restrictions on online content. This approach seeks to balance freedom of speech with maintaining a safe online community. Most of these regulations (in Germany, the EU, and Australia) limit themselves to prohibiting online what is already illegal, such as terrorist content or hate speech.


Germany

Section 3(2) of the Network Enforcement Act, 2018 ("NetzDG") requires social networks to remove or block access to content that is "manifestly unlawful" within 24 (twenty-four) hours of receiving a complaint. Unlawful content is content which qualifies as an offence under the German Criminal Code, including hate speech and defamation.

European Union

The European Union Recommendation to Effectively Tackle Illegal Content Online, 2018 ("Recommendation") requires online content platforms to proactively detect and remove terrorist and other illegal content, such as hate speech, incitement to violence, child sexual abuse material, commercial scams and frauds, and breaches of intellectual property. However, since terrorist content is likely to spread faster than other forms of content, it should be removed within 1 (one) hour ("One Hour Rule") of being flagged to the online platforms by law enforcement authorities.

Australia

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 ("AVMA") makes it an offence to share abhorrent violent material online. Abhorrent violent material includes content depicting terrorism, rape (including any attempt to rape), murder (including any attempt to murder), torture, and/or kidnapping, where such conduct would be regarded, in all circumstances, as offensive by reasonable persons. It is immaterial whether the acts depicted were committed within or outside Australia. Online content platforms can be liable under the AVMA if they fail to remove such content expeditiously.

Further, the Enhancing Online Safety Act, 2015 ("EOSA") prohibits the publication of cyber-bullying material that targets a child who is ordinarily resident in Australia. Cyber-bullying material includes any material that is targeted at a specific Australian child and may have the effect of seriously humiliating, harassing, intimidating or threatening that child.

Canada

Section 1 of the Canadian Charter of Rights and Freedoms states that the basic rights and freedoms of individuals are not absolute. They can be limited to protect other rights or important national values, provided that such limitations are reasonable and can be justified in a free and democratic society.

The Supreme Court of Canada in R v Oakes (1986) observed that the 'true freedom of an individual cannot come at the expense of dispensing with someone else's freedom'. Further, in Google v Equustek (2017), the Supreme Court held that 'a country has the right to prevent the world's internet users from accessing information' while directing Google to delete certain content from its websites worldwide (including Google USA and Google UK).

This allows Canadian courts to direct online companies to block objectionable content across the globe.

Further, the Canadian Prime Minister has signed the Christchurch Call, which commits to eliminating the distribution of terrorist and violent extremist content online.

Pros: The NetzDG and the One Hour Rule do not create any additional legal grounds for limiting freedom of speech and expression online. The One Hour Rule is an effective way to address the radicalization of individuals, recruitment to terror groups and incitement of violence against any community, as there is a link between online radicalization and terrorism. Similarly, the AVMA limits 'abhorrent violent material' to content which contains criminal acts. Further, the AVMA is not vague, as it relies on specific legal concepts to determine what exactly constitutes 'abhorrent violent material'. In addition, the EOSA targets particular instances of bullying and defines "cyber-bullying material" in a specific and limited way.

Cons: The NetzDG makes online content platforms responsible for deciding what content is "manifestly unlawful". Further, smaller platforms may find it difficult to comply with the One Hour Rule, as they might lack the automated means to detect and remove such content expeditiously. Online content platforms such as social media companies, internet service providers, email services, instant messaging applications and any website that allows users to interact with one another may be liable under the AVMA. Moreover, the Canadian Supreme Court's judgment in Google v Equustek has been criticized for promoting worldwide censorship of online content.

Jurisdictions with Moderate Regulation

Observations


Most provisions that enable a government to block or remove online content (such as those of Singapore, India, and Sri Lanka) have been drafted in vague and broad terms. This has been done to ensure that such provisions capture any unforeseen instance that could lead to harmful consequences for the nation or for the online community. However, the breadth of these provisions could be used to suppress dissent and free speech.


India

Section 69A of the Information Technology Act, 2000 ("IT Act") empowers the central government to block public access to any content where it considers such blocking necessary in the interest of the sovereignty and integrity of India, the defence of India, the security of the State, friendly relations with foreign states, or public order, or for preventing incitement to the commission of a cognizable offence. Any order to block public access to such content must be in writing.


Sri Lanka

Section 6 of the Computer Crimes Act No. 24 of 2007 ("CCA") prohibits any person from performing any function that will result in danger, or imminent danger, to national security, the national economy or public order.

Singapore

Section 7 of the Protection from Online Falsehoods and Manipulation Act 2019 ("POFMA") prohibits the spread of fake news by prohibiting the communication of a 'false statement of fact', whether in or outside Singapore, if the communication of such a statement is likely to adversely affect Singapore's national security; public health, tranquility, safety, or finances; friendly relations with other states; any presidential or general election; or public confidence in the government; or to incite feelings of communal hatred or ill-will. Online content platforms will not be held liable for any 'false statements' posted by others on their platforms. However, the platforms can be required by the government to disable access to, or correct, the fake news available on their platforms. Section 33 of the POFMA enables a minister to disable or block access to any content which has previously been declared by the government, under Part 3 or Part 4 of the POFMA, to contain falsehoods. Any platform displaying fake news can be required either to correct that falsehood or to display a notice that the content has been the subject of a declaration under the POFMA because it is suspected to be false.

Pros: The provisions of the IT Act, the CCA, and the POFMA contain some limitations on the government's power to block content, such as the requirement that orders be in writing (under the IT Act), or that such blocking is necessary because the content endangers, or will affect, the stability of the nation (under the CCA). Similarly, under the POFMA, only "false statements of fact" are prohibited, and not criticisms or opinions.

Cons: However, neither the IT Act nor the CCA specifies how the adverse effect of online content on the stability of the nation is to be determined. Furthermore, broadly worded legislation such as the POFMA may entitle the government to control the public narrative, as it is difficult to distinguish between statements of opinion and statements of fact.

Jurisdictions with Strict Regulation

Observations

Countries like Russia and China have vague online censorship laws that could result in the isolation of the internet within each country.

Russia

The Russian Internet Restriction Bill 2012 ("IRB") empowers the Russian government to blacklist and block websites containing child pornography, information related to narcotics or suicide, any information that a Russian court has prohibited from being distributed, content calling for illegal meetings, extremist content, hate speech or content violating the established order. The IRB also restricts content that can influence the population (especially the youth) and weaken the country's cultural and spiritual values.

China

Under the Provisions on Management of Public Information Services and Group Information Services, any online content service (i.e., a "public information and account service") and any private or public chat group in China (i.e., an "internet group information service") must publish or post content that adheres to socialist core values and cultivates a positive network culture. In addition, the Chinese government has been known to censor content related to the Cultural Revolution, foreign television, pornography, religious material, etc. Furthermore, western social media platforms like Facebook, Google, and Twitter are not allowed to operate in China.

Pros: The IRB seems to limit the scope of government censorship and control over the Russian internet (when compared with China) through the well-defined categories mentioned above.

Cons: However, broad provisions such as 'content that could weaken Russia's cultural values' or 'content that is not aligned with socialist core values' (in the case of China) can give both governments excessive control over the internet in their respective countries. Such provisions are indicative of an authoritarian state.

Jurisdiction with No Regulation

Observations


The United States of America currently has no laws on the takedown of online content. Previously, a portion of Section 223 of the Communications Decency Act, 1996, which sought to protect minors from "indecent" or "patently offensive" content, was struck down by the US Supreme Court as unconstitutional. Perhaps this free regulatory environment has played a significant role in the USA's emergence as a leader in the field of technology and online content.


United States of America

Section 230 of the Communications Decency Act, 1996 ("CDA") states that it is the policy of the USA to ensure vigorous enforcement of federal criminal laws to deter and punish trafficking in obscenity, stalking and harassment by means of computer. Further, it states that no online platform will be held liable for removing or restricting access to content posted by someone else where such content is viewed (by a user of the online platform or by the platform itself) as obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable (collectively, "Problem Content"), whether or not such material is constitutionally protected.

Pros: This provision protects freedom of speech and expression, which is vital to sustaining a liberal democracy. Further, it does not impose any compliance burden on online content platforms and enables them to remove any Problem Content even if that content is constitutionally protected. Moreover, Section 230 of the CDA acts as a legal shield from liability for online content platforms in the USA.

Cons: Since the CDA empowers a user or an online platform to restrict or remove Problem Content, it may inadvertently enable censorship by content publication platforms. Further, the term "otherwise objectionable" is broad enough to cover any kind of content not specified within Section 230 of the CDA.


Conclusion

It is evident that certain categories of content, such as content related to terrorism, pornography, child sexual abuse and hate speech, are prohibited in several countries. Fewer jurisdictions prohibit content that threatens national sovereignty or security, and the contours of such provisions are usually left broad so as to encompass any unforeseen situation that could have the stated effect. Most liberal democracies have prohibited content that could be categorised as hate speech.

In line with the Canadian Supreme Court's decision discussed above, it can be argued that certain limitations on individual rights and freedoms are necessary to maintain a safe environment for all members of the online community. However, the standard for such limitations varies across countries. This may change as the internet rapidly replaces other forms of information consumption, and we may see some uniformity across jurisdictions regarding what is considered illegal online content.

Originally published 9 January 2020.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.