16 October 2024

Mod Squad: Legal Aspects Of Social Content Moderation (Video)

Gamma Law

Contributor

Gamma Law is a specialty law firm providing premium support to select clients in cutting-edge media/tech industry sectors. We have deep expertise in video games and esports, VR/AR/XR, digital media and entertainment, cryptocurrencies and blockchain. Our clients range from founders of emerging businesses to multinational enterprises.

The social media landscape is in flux. The dizzying rise of AI and the burgeoning metaverse have ignited a global debate around online safety and content moderation. At the heart of this discussion, in the United States at least, lies a critical question: How can we ensure a safe and healthy online environment without stifling free speech? The startling ease with which special interests create and disseminate disinformation and deepfakes that push First Amendment protections to their limits — and possibly beyond — complicates the discussion.

Lawmakers across the country are grappling with how to address the challenges posed by online content, from the spread of misinformation to hate speech and copyright infringement.

A Global Quest for Transparency

The proliferation of "information disorder syndrome" — creating or sharing misinformation, disinformation, fake news, and hate speech, with or without malicious intent — has become a major concern for governments worldwide. In response, many countries are turning to transparency as a potential solution. The logic is simple: by shedding light on their content moderation practices, social media companies can be incentivized to improve them.

For example, California passed Assembly Bill 587 in 2022, requiring social media platforms with over a million daily active users to publicly disclose their content moderation policies. This includes how they manage reports of hate speech, misinformation, libel, and user appeals. Similarly, the European Union's landmark Digital Services Act (DSA) mandates transparency from social media giants. The DSA requires platforms to disclose their content moderation practices, the criteria used to moderate content, and how they deal with user complaints.

While transparency seems like a straightforward approach, the devil — as always — lies in the details. An experienced Web3 attorney can advise platforms on exactly what and how much information needs to be disclosed and how this information will be used.

A World of Diverse Approaches

The global response to online content moderation is far from uniform. While the US and EU emphasize transparency, other countries have taken a more direct approach.

Germany, Austria, and several others have enacted laws specifically prohibiting hate speech and violent content on social media platforms. These laws often require platforms to remove such content within a tight timeframe, typically 24 hours, or face hefty fines. As might be expected, this approach has sparked concerns about potential censorship and the chilling effect it might have on legitimate free speech and political commentary.

Beyond Europe, several countries are actively shaping their regulatory landscapes. Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia have implemented or proposed laws similar to the German model.

Indonesia's proposed legislation seeks to compel social media platforms to operate under local jurisdiction for content and user data. This move has significant implications for data privacy and internet governance, potentially disrupting the free flow of information.

Ethiopia's approach is particularly aggressive. Its Computer Crime Proclamation and Hate Speech and Disinformation Prevention and Suppression Proclamation of 2020 require platforms to function as digital police forces, mandating the removal of "disinformation" and hate speech within 24 hours. Such compressed timeframes raise serious concerns about the accuracy and fairness of content moderation decisions.

The proposed Mauritian law represents an even more extreme example. It would empower the government to seize and intercept traffic to social media platforms, in apparent disregard of net neutrality and the free flow of information.

The US Walks a Tightrope

Despite First Amendment protections, there are growing calls in the US for the enactment of broad content moderation laws. The potential harms arising from unfettered online discourse cannot be ignored: the spread of misinformation can undermine trust in institutions and democratic processes, hate speech can incite violence and marginalize vulnerable groups, and copyright infringement on these platforms can damage creativity and stifle innovation.

Traditionally, US law has afforded online services significant leeway in content moderation thanks to Section 230 of the Communications Decency Act. This legal shield allows platforms to host, display, remove, or block user-generated content without facing liability for its origin, freeing them to focus on growth without the burden of constant content moderation. While Section 230 has fostered online innovation, concerns have emerged that this shield can be abused. Critics argue that Section 230 allows platforms to become de facto publishers, wielding immense power over online discourse with limited accountability.



As concerns about online harm mount, some states have taken matters into their own hands. In 2021, Texas and Florida passed laws aimed at regulating content moderation practices by large social media companies, spurred by the belief that platforms were censoring certain viewpoints. However, the laws contained carve-outs exempting smaller, often politically extreme platforms, drawing accusations of bias.

These state laws also faced legal challenges, with critics arguing that they violated the First Amendment by interfering with platforms' editorial discretion. Tech giants like Facebook and X (formerly Twitter) argued that the laws would stifle their ability to curate a safe and healthy online environment.

Organizations employing content marketing, digital media, or AI should prepare for these regulatory shifts by consulting with an attorney specializing in Web3 law. Appropriate legal counsel can ensure businesses implement monitoring tools and processes that comply with current and coming changes in content moderation laws worldwide. Lawyers in this space can also help build robust protections to prevent unauthorized use of intellectual property and shield companies from liability stemming from the content they produce, disseminate, or host.

Opposition to Moderation

Jurisdictions attempting to implement content moderation schemes face significant challenges and opposition from various stakeholders. One of the primary arguments against content moderation laws is that they may lead to an increase in government power over the internet under the guise of national security. History has shown that governments have sometimes engaged in questionable activities to censor online speech, raising fears about the potential abuse of content moderation laws.

Critics also point out that content moderation laws can have unintended consequences. For example, such laws may spill over to e-commerce platforms, forcing stores to include items they don't agree with or remove legitimate product reviews, potentially infringing on their rights as private businesses.

Finding Common Ground: A Web3-Centric Approach

The ideal solution lies somewhere between the US's laissez-faire approach and the more restrictive models employed by some other countries. Here are some key considerations for crafting an effective US framework for the Web3 era:

  • Content Categorization: Distinct categories of content may require different moderation approaches. For instance, misinformation may be best addressed through educational initiatives and fact-checking mechanisms, while violent content might necessitate stricter takedown policies.
  • Transparency and Accountability: Transparency is crucial but shouldn't be the end goal. Social media companies should be held accountable for their content moderation decisions, and users should have clear avenues to appeal content removals or suspensions.
  • AI and Algorithmic Bias: As communicators increase their dependence on AI to generate and evaluate content, potential algorithmic biases must be eliminated. Regular audits and human oversight mechanisms are essential to ensure fairness and prevent discriminatory outcomes.
  • Global Harmonization: Given the borderless nature of the internet, a truly effective solution requires international cooperation. This, of course, is easier said than done. The US and its allies may at least be able to develop harmonized approaches to content moderation that respect free speech and user empowerment while stifling calls for violence, sedition, and attacks on minorities.
  • Decentralization: The rise of Web3 technologies, such as blockchain and decentralized social networks, offers a potential path forward. Decentralized platforms can incentivize responsible user behavior and give users more control over their data.

Web3 technology and metaverse companies should consult a lawyer or law firm specializing in the metaverse before launching their platforms; a lack of foresight and judgment in this area can lead to reputational and financial damage. Beyond advising on the full legal implications of content moderation law, an attorney can help draft terms and conditions, content policies, and other legal documents for the platform, reducing exposure to liability arising from violations of content laws.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
