Since the release of OpenAI's ChatGPT, artificial intelligence, and specifically the rise of large language models (LLMs), has become a topic of discussion for media, pop culture, and businesses alike. As these entities have begun to grapple with the new technology, so too have Federal regulators. The FCC and FTC, however, have begun to take somewhat divergent paths in their approaches.

FCC

The FCC has taken a collaborative approach to artificial intelligence regulation. The agency's Technological Advisory Council has been grappling with AI implementation for years, planning how the agency can best use the technology in its operations. The Advisory Council has recently provided a set of recommendations for the Commission, including setting up an AI task force and studying spectrum usage to better understand how AI can be implemented. This summer, the FCC also cohosted a workshop with the National Science Foundation on the opportunities and challenges of AI for communications networks and consumers. The workshop focused on how AI can help with the agency's regulatory challenges.

In August, the FCC approved a Notice of Inquiry to help determine how technologies such as artificial intelligence can assist in managing spectrum usage. The inquiry notes "the burgeoning growth of machine learning (ML) and artificial intelligence (AI) offer revolutionary insights into large and complex datasets" that can be used to improve spectrum management. The Commission hopes that this understanding will spur AI innovation, such as enabling greater spectrum use, identifying additional spectrum-sharing methods, and supporting spectrum coexistence among different users and services. Past Commission comments suggest those innovations may also include spectrum self-healing.

FCC Chairwoman Rosenworcel has expressed optimism about the future of AI: "From my perch as the head of our Nation's expert agency on communications, I can't help but be an optimist about the future of AI . . . machine learning can provide insights that help better understand network usage, support greater spectrum efficiency, and improve resiliency by making it possible to heal networks on their own." However, other FCC commissioners have expressed caution about the agency taking significant regulatory action. Commissioner Brendan Carr suggested Congress should lead the AI regulatory discussion, while Commissioner Nathan Simington expressed concerns that regulation could cause more harm than good for the upstart industry.

FTC

The FTC has taken a more adversarial tack in response to the rise of AI. Led by Chairwoman Lina Khan, the agency has been outspoken in its concerns about the fledgling industry. Those concerns have recently resulted in several enforcement actions and public statements.

Over the past year, the FTC has raised concerns about AI companies' compliance with consumer protection and antitrust laws, particularly AI's role in creating or disseminating illegal or deceptive content. In June 2022, following a request from Congress, the FTC released a report on the role AI could take in addressing online harms. The report warned of AI's limitations and flaws, providing a series of considerations for using the technology to monitor and mitigate bad actors online. Those considerations include sustained human oversight of AI tools, a need for transparency and accountability in the programs, and ensuring compliance with legal and regulatory requirements, especially if AI is used to censor or take down speech.

This year, the agency has published blog posts on AI's production of deceptive content as well as AI companies' misleading use of the phrase "artificial intelligence" in their advertisements. The posts argue that the FTC Act covers and prohibits such conduct. In February, the FTC also created an Office of Technology to "strengthen the [agency's] ability to keep pace with technological challenges in the digital marketplace." Further, the agency released a joint statement with the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission outlining concerns about the use of automated systems and the agencies' commitment to enforcing their respective laws.

FTC Commissioners have highlighted these concerns. Chairwoman Khan published an op-ed in the New York Times outlining the agency's commitment to protecting consumers and preventing anti-competitive behavior in the AI sector. FTC Commissioner Alvaro Bedoya has expressed his apprehension about how entities have incorporated AI into decision-making.

The FTC has also begun to pursue potential enforcement actions in the AI sector. In July, the agency opened an investigation into OpenAI for potential violations of consumer protection laws. The agency is inquiring about potentially false and misleading statements made by ChatGPT, some of which may have resulted in "reputational harm." The FTC's Civil Investigative Demand raises concerns about the chatbot manufacturing—or "hallucinating"—false or disparaging statements about real individuals. The agency also requested information following an error that caused private user data to appear in ChatGPT's results.

As AI continues to advance and integrate into our lives, regulators at the FTC and FCC will have to continue to determine the technology's role in what and how they regulate. Based on their track records so far, the two agencies may chart starkly different courses.

AUGUST 10, 2023

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.