ARTICLE
13 March 2025

FTC Requests Input From Tech Platform Users About Speech

Sheppard

Contributor


The Federal Trade Commission recently requested public comment from users of tech platforms, in particular on the impact those platforms may have on user speech. Input is sought, by May 21, on the extent to which tech firms may be engaging in the suppression of free speech.

Using terms like "censorship," "demonization," and "shadow banning," the request for public comment signals a new direction for the agency under Chairman Andrew Ferguson. That direction reflects a concern expressed before the new administration took office: that tech platforms were using their roles to censor speech (see Murthy v. Missouri).

The request is unlike those we have seen from the FTC in the past, insofar as it seeks comment about the tech platforms not from the platforms themselves, but directly from users. As of this writing, the agency has received over 1,000 comments. Among other things, the agency has asked people to provide input on:

  1. Impact: Whether tech platforms banned users because of the content of their speech, or took other adverse actions, and the extent to which those actions harmed them. Relatedly, the request asks whether people were given a "meaningful" way to challenge adverse decisions.
  2. Moderation: Whether moderation policies were in place, and whether the platform told people (even implicitly) that they could appeal the platform's decisions. Also asked was whether platforms used "opaque" or "unpredictable" processes to restrict access.
  3. Pressure: Interestingly, the request asks potential commenters to speculate on the "factors [that] motivated platforms' decisions," including decisions that resulted in users being banned. Suggested factors include pressure from advertisers, state or local governments, or foreign governments.
  4. Competition: Whether tech platforms coordinated, directly or through trade associations, on policies and adverse actions.

Putting it into Practice: Private platforms' moderation policies date to the early days of the Internet and the passage of the Digital Millennium Copyright Act and the Communications Decency Act. These policies typically state that content violating the policy will be removed; the alternative, modifying content, would risk making the platform a participant in the creation of the content and losing the safe-harbor protections of the DMCA or CDA. We anticipate comments from industry groups, in addition to the many already received from users themselves. The comment period closes May 21.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

