President Biden's artificial intelligence (AI) executive order (EO) encourages independent regulatory agencies, which operate outside direct presidential control and cannot be commanded by the EO, to employ "their full range of authorities" to protect consumers from fraud, discrimination, privacy invasions, and other injuries caused by AI. This encouragement endorses the view that the laws enforced by agencies such as the Consumer Financial Protection Bureau (CFPB) (for more on the CFPB, please see our Advisory), the Federal Trade Commission (FTC), and the Securities and Exchange Commission (SEC) extend to AI-caused harms.

The agencies certainly have taken this position. As FTC Chair Lina Khan has explained, "Technological advances can deliver critical innovation — but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition."

The FTC, in particular, has broad jurisdiction over most economic sectors and wide-ranging powers at its disposal. Section 5 of the FTC Act broadly prohibits "unfair or deceptive acts or practices in or affecting commerce" and authorizes the agency to stop such acts and, in some cases, to penalize offenders. Under this authority, the FTC has advised businesses "to avoid using automated tools that have biased or discriminatory impacts." It has warned companies, especially their marketing teams, not to make "false or unsubstantiated claims" about the efficacy of AI products, noting "that some products with AI claims might not even work as advertised in the first place." And it (aggressively) asserts that Section 5's prohibition can apply to making, selling, or using any tool "that is effectively designed to deceive — even if that's not its intended or sole purpose."

For its part, the SEC has enforced the Investment Advisers Act of 1940 against so-called "robo-advisers," which offer automated portfolio management services. For example, in twin 2018 proceedings, the SEC found that robo-advisers had made false statements about investment products and published misleading advertising. The agency also has warned robo-advisers to consider their compliance with the Investment Company Act of 1940 and Rule 3a-4 under that statute. Rule 3a-4 provides a safe harbor from certain obligations under the Investment Company Act; robo-advisers that fail to meet its requirements (for instance, by insufficiently soliciting and incorporating client feedback into the investment strategy) may face liability under those obligations.

Public companies listed on U.S. exchanges also may face liability under the Securities Exchange Act of 1934 for material misstatements or omissions about their AI systems' capabilities, or about the risks arising from their development, distribution, or deployment of those systems.

We expect the FTC, SEC, and other independent regulators, reinforced by the EO's encouragement, both to craft new guidance and to crack down further on asserted AI-related violations of these statutes.

Echoing recent statements by SEC Chair Gary Gensler, the EO also identifies "risks to financial stability" among the risks it encourages the independent agencies to address "using their full range of authorities." Chair Gensler has suggested that reliance on AI could encourage destabilizing herd behavior and has said that agency guidance on model risk management needs to be updated to address this risk. He also has discussed the threat to stability with other regulators, suggesting that broader action may be coming as well.

In addition, some independent agencies have begun devising new rules to clarify AI-related prohibitions under the statutes they enforce. FTC Act Section 5 underlies the "commercial surveillance" rulemaking the FTC commenced in August 2022. The agency began with 95 questions regarding privacy, data security, and automated decision-making (ADM). Some of the questions concern the reliability of ADM systems and may portend rules on ADM accuracy, validity, reliability, and error, in addition to algorithmic discrimination, which the questions also cover. It seems likely the FTC will propose new regulations in the coming months.

In August 2023, the SEC proposed a new rule to prevent conflicts of interest arising out of broker-dealers' and investment advisers' use of AI systems and other tools for predictive data analytics. The proposal would apply to "technologies that optimize for, predict, guide, forecast, or direct investment-related behaviors or outcomes. ... This could include providing investment advice or recommendations, but it also encompasses design elements, features, or communications that nudge, prompt, cue, solicit, or influence investment-related behaviors or outcomes from investors." Thus, the proposal would govern behavioral prompts and social engineering, such as providing curated research or targeting risk-tolerant investors. The SEC also intends to reach a broad range of AI systems, including commercial off-the-shelf systems if used by a broker-dealer or investment adviser "to draft or revise advertisements guiding or directing investors or prospective investors to use its services." Significantly, it would not suffice under the proposed rules merely to disclose such conflicts. Rather, the rules would require firms affirmatively to identify and eliminate the conflicts, even if they arise from so-called "black-box" systems.

Next week, the Federal Communications Commission will consider opening an inquiry into the impact of AI technologies on fraudulent or merely annoying robocalls and robotexts, in an effort to curb them.

The EO recommends that agencies "consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities' ability to explain their use of AI models." Consistent with this recommendation, additional rulemakings to establish consumer AI protections under existing statutes seem likely.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.