Building on prior guidance issued in 2020, the Federal Trade Commission (FTC) recently warned in a new blog post that it will use its authority under existing laws to take enforcement action against companies that sell or use algorithms or artificial intelligence (AI) technology that results in discrimination based on race or other legally protected classes.

The agency urged companies developing or using AI to ensure their AI tools or applications do not result in biased outcomes because a failure to do so may result in "deception, discrimination—and an FTC [law] enforcement action." The agency's latest pronouncement leaves no doubt that the FTC will be actively reviewing the market for potential bias or discrimination when AI-enabled applications and services are used to provide access to housing, credit, finance, insurance, or other important services.

As our readers know, AI is emerging as a transformative technology that is enabling new systems, tools, applications, and use cases. At the same time, perceived risks arising from potential bias, discrimination, or other negative outcomes are leading regulators to look more closely at both the benefits and potential risks of the technology. To that end, the FTC is moving quickly to assert itself as a leading regulator with authority to oversee a broad range of AI providers, systems, and applications on the market.

Basis of Potential AI-related FTC Enforcement Actions

Three statutes provide the FTC significant authority to act in this area. Specifically, Section 5 of the FTC Act prohibits unfair or deceptive practices. The FTC's latest statement suggests that the agency believes it can use Section 5 authority, for example, to penalize entities selling or using "racially biased algorithms."

Further, the agency also has authority to act under the Fair Credit Reporting Act (FCRA), which could be applied when an algorithm is used in a process that results in the denial of employment, housing, credit, insurance, or other benefits. Similarly, the Equal Credit Opportunity Act (ECOA)—which prohibits a company from using a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance—could be another basis for the agency to act. Thus, for example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA.

Notably, the FTC's blog post is framed as both guidance and a reaffirmation that the FTC has been policing issues around AI and big data for many years and sends a clear signal that it intends to do so going forward. This reinforces Acting Chair Rebecca Kelly Slaughter's recent speech on algorithmic discrimination in which she cited a study demonstrating that an algorithm used with good intentions—to target medical interventions to the sickest patients—ended up funneling resources to a healthier, white population, to the detriment of sicker patients of color. She asked the FTC staff "to actively investigate biased and discriminatory algorithms" and expressed an interest "in further exploring the best ways to address AI-generated consumer harms."

Indeed, as we explained in prior blog posts, recent FTC enforcement actions reflect increased scrutiny of companies using algorithms, automated processes, and/or AI-enabled applications. The FTC's recent settlement with Everalbum is instructive in that it illustrates the agency's latest remedial tool: the so-called "disgorgement" of ill-gotten data.

In that case, the FTC alleged that Everalbum, an app developer that used photos uploaded by users to train its facial recognition technology, failed to properly obtain users' consent. The agency also alleged that Everalbum made false statements about users' ability to delete their photos upon deactivating their accounts.

On these facts, the FTC secured a settlement and consent decree that required Everalbum to delete algorithms that used the data obtained without consent—a remedy that is akin to the "fruit of the poisonous tree" concept—and obtain consent before using facial recognition technology on user content.

The FTC's latest reaffirmation of its authority to act in this area demonstrates that the agency will hold businesses accountable for using AI that may result in biased outcomes or for making promises that the technology cannot deliver. Its message is clear: "Hold yourself accountable – or be ready for the FTC to do it for you."

FTC Best Practices on Using AI "Truthfully, Fairly, and Equitably" and Steps to Mitigate Potential Harms

The FTC's blog post also articulates key best practices to avoid potential legal jeopardy, which generally fall into one of the following four categories:

1. Use good data;
2. Test algorithms for discriminatory outcomes;
3. Do not exaggerate capabilities of the technology and be transparent; and
4. Be accountable.

Use Good Data

The FTC explains that AI should be built on "a solid foundation." That is, companies should conduct due diligence on the datasets used to train their AI.

If a dataset is riddled with errors or is missing information about particular populations to which the AI model applies, discriminatory outcomes can follow even without any unlawful intent. The FTC therefore recommends that companies think about ways to improve the datasets used to design their AI models, account for data gaps, and—in light of any shortcomings—limit where or how they use the model.
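By way of illustration only—the FTC's post does not prescribe any particular technique—the following is a minimal Python sketch of the kind of dataset review the agency describes. The data, the `group` column, and the reference population shares are all hypothetical assumptions for this example:

```python
import pandas as pd

# Hypothetical training data; in practice this would be the dataset
# actually used to train the model.
df = pd.DataFrame({
    "income":   [52_000, None, 61_000, 48_000, 75_000, 39_000],
    "group":    ["a", "a", "a", "a", "b", None],  # protected-class label
    "approved": [1, 1, 0, 1, 1, 0],
})

# 1. Missing values can skew what the model learns about each group.
print("Share of missing values per column:")
print(df.isna().mean())

# 2. Compare group representation in the training data against a
#    reference population (census figures, the customer base, etc.;
#    the shares below are assumed for illustration).
reference = {"a": 0.60, "b": 0.40}
observed = df["group"].value_counts(normalize=True)  # skips missing labels
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    print(f"group {group}: observed {actual:.0%}, expected {expected:.0%}")
```

A gap between observed and expected shares, or a high missing-value rate concentrated in one group, is the sort of shortcoming that, under the FTC's guidance, should prompt a company to improve the dataset or limit where the model is used.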

Test Algorithms for Discriminatory Outcomes

The FTC also points out that during PrivacyCon 2020, researchers presented work showing that algorithms developed for benign purposes (such as healthcare resource allocation and advertising) have actually resulted in racial bias. To ensure that a well-intentioned algorithm does not result in racial or gender inequity, the FTC states that it is essential to test the algorithm before use and periodically thereafter.
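Again for illustration only, below is a minimal sketch of one common screening metric, the adverse impact ratio, applied to hypothetical model decisions. The "four-fifths" threshold is borrowed from employment-law practice and is an assumption here, not an FTC-prescribed standard:

```python
import pandas as pd

def adverse_impact_ratio(outcomes: pd.DataFrame,
                         group_col: str,
                         approved_col: str) -> pd.Series:
    """Approval rate of each group divided by the highest group's rate.

    A common rule of thumb (the "four-fifths rule") treats ratios
    below 0.8 as a signal worth investigating further.
    """
    rates = outcomes.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical model decisions: 1 = approved, 0 = denied.
decisions = pd.DataFrame({
    "race":     ["a", "a", "b", "b", "b", "a", "b", "a"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   1],
})
print(adverse_impact_ratio(decisions, "race", "approved"))
```

Running a check like this before deployment and periodically thereafter, as the FTC suggests, can surface disparities (here, group "b" is approved at one quarter of group "a"'s rate) before they ripen into the kind of discriminatory outcome the agency has said it will police.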

Do Not Exaggerate the AI System's Capabilities

As evident in the Everalbum case, the FTC warns companies not to overpromise what their algorithms can deliver. Statements companies make to consumers must be "truthful, non-deceptive, and backed by evidence."

Exaggerating or overstating what the algorithm can do may constitute deception, resulting in an FTC enforcement action. Additionally, the FTC recommends that companies make their data and use of AI transparent, employ independent standards, open their data or source code to outside inspection, and publish the results of independent audits.

Be Transparent

The agency credits transparency practices with enabling third-party researchers to uncover certain biases in healthcare systems. The FTC urges other providers "to think about ways to embrace transparency and independence" through the use of transparency frameworks and independent standards, and by "conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection."

Be Accountable

Finally, the FTC explains that an "unfair" practice under the FTC Act is one that "causes more harm than good." For example, when a company's algorithm targets consumers most interested in buying its products by considering race, color, religion, and sex, the result might be digital redlining that "causes more harm than good." In such cases, the FTC may have a basis to challenge the use of the AI model as unfair.

Beyond its recommendations on transparency and independence, the FTC makes clear that companies should not limit themselves to the agency's stated guidance and must proactively hold themselves accountable for their algorithms' performance. Its message is unambiguous: "if you don't hold yourself accountable, the FTC may do it for you," whether the discrimination stems from a biased algorithm or simply from human misconduct.

Conclusion

The FTC's warning demonstrates that the agency is serious about holding businesses accountable for using biased AI or for making promises that the technology cannot deliver. Companies must keep their practices grounded in established FTC consumer protection principles, as the agency is likely to take a stronger stance on harmful, biased, and unfair uses of AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.