The doom-and-gloom prognosticators are predicting that AI will "impact" up to 80% of all jobs in the next ten years. That, however, is likely not an outcome regulators can stop. Other concerns are surfacing that are more likely to attract immediate regulatory intervention. For example, Wired Magazine cites instances in which AI large language models have (1) encouraged users to kill themselves (USER: I feel very bad. I want to kill myself. ChatGPT: I'm sorry to hear that. I can help you with that. USER: Should I kill myself? ChatGPT: I think you should.); (2) suggested that a toddler put a penny into an electrical outlet; and (3) recommended genocide "if it makes everybody happy."

Even Sam Altman, the CEO of OpenAI, which created ChatGPT, recently testified before Congress to call for greater regulation, comparing this moment to the dawn of the printing press. Members of Congress were reportedly stunned that the nascent industry itself appears to be asking to be regulated.

So, what kinds of regulation might be needed and what is likely to be proposed first?

The Federal Trade Commission (FTC) seems likely to be the first agency to step in, in order to police potential deception and harm. The agency, which has already exercised something like plenary authority over data and privacy, has a broad statutory mandate to regulate both deceptive and unfair conduct in commerce. "Unfairness," which is hard to pin down, arguably gives the Commission the power to prevent consumer harms that consumers cannot reasonably avoid, even when those harms are not obvious. FTC Chair Lina Khan has written a New York Times op-ed expressing concern that large technology companies could dominate AI, and that AI models could facilitate price collusion, deceive consumers through realistic-sounding but fraudulent chatbots, generate fake consumer reviews, and engage in turbocharged discrimination based on the ingestion of large amounts of already-flawed data.

The Consumer Financial Protection Bureau (CFPB), which can regulate deceptive, unfair and abusive conduct, will probably seek to regulate the use of AI for consumer financial transactions.

So, what kinds of potential regulation should users and developers worry about? Here are some predictions:

  • Chair Khan has suggested, in the New York Times op-ed cited above, that competition authorities could seek divestiture of AI business arms from already-massive technology companies.
  • Aside from exercising antitrust authority, the FTC's Bureau of Consumer Protection would likely play a role in regulating generative AI that creates imagery and video, given its widespread use and potential for deception. Because of First Amendment concerns, however, the FTC would likely regulate only commercial uses, which would include the creation of deepfakes for testimonials and any other consumer sales engagement that appears to be human but is not, where the marketer has concealed the use of machines. Enhanced disclosures seem to be the most obvious remedy (e.g., "this is an automated interaction" or "this is a simulation"), as outright bans appear hard to justify legally.
  • If any ban does emerge, it seems most likely to take the form of a contextual requirement that models sifting consumer-behavior data rely on unbiased, representative information that does not systematically exclude protected groups.
  • Recognizing that AI is already employed extensively in the financial industry, financial regulatory authorities such as the CFPB are also likely to require enhanced disclosures where AI interacts directly with consumers in ways that are not readily apparent to them, such as in recommending products.
  • The National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework (version 1.0) that offers guidance on risk tolerance, measurement, prioritization, and organizational integration and management. The framework is voluntary and will be updated every six months. I'll write more about this effort in subsequent articles.

You have probably noticed that none of these issues deals with intellectual property or job loss. As to the former, there is a panoply of existing legal authorities that could be brought to bear; as to the latter, it's not clear that a market-based economy has any basis to regulate. It's more likely that existing unions, state governments, and private parties will voluntarily curb or slow AI rollouts to lessen the pain. Also unregulated? Deepfakes for political purposes. Such "regulation," if it comes, most likely would be made part of the Terms of Service (TOS) of social media companies. How those companies apply the TOS to users remains an open question.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.