On April 24, 2023, the Consumer Financial Protection Bureau (CFPB), Federal Trade Commission (FTC), Department of Justice (DOJ) and Equal Employment Opportunity Commission (EEOC) released a joint statement affirming that existing laws – including those that prohibit discrimination, as well as unfair, deceptive, or abusive acts or practices (UDAAP) – apply to the use of automated systems and innovative new technologies, including artificial intelligence. Acknowledging AI's growing role today and its potential benefits, such as increased efficiency and advancement in many arenas, the agencies cautioned that AI may also "perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes." The agencies "reiterate[d] [their] resolve to monitor the development and use of automated systems and promote responsible innovation" and "pledge[d] to vigorously use [their] collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies." In his remarks on the joint statement, CFPB Director Rohit Chopra further noted that "[c]ompanies must take responsibility for their use of these tools."

Potential for unlawful discrimination

The joint statement identifies three potential sources of discrimination in automated systems:

  • Data and data sets: Automated system outcomes can be skewed by data that is unrepresentative or imbalanced, incorporates historical bias or contains other errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes (a short sketch of this proxy risk follows the list).
  • Model opacity and access: The internal workings of automated systems often are not transparent, making it difficult for developers, businesses and individuals to determine whether an automated system is fair.
  • Design and use: Developers may fail to understand or account for the contexts in which private or public entities will use their automated systems and may design a system on the basis of flawed assumptions.
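
To make the proxy risk concrete, the following minimal sketch (in Python, with entirely hypothetical data and column names) shows one way a compliance team might screen candidate model inputs for correlation with a protected class before relying on them in an automated system:

```python
import pandas as pd

# Hypothetical applicant data; every column name and value is illustrative.
df = pd.DataFrame({
    "zip_median_income": [42_000, 95_000, 38_000, 88_000, 41_000, 97_000],
    "years_at_job":      [5, 4, 6, 3, 2, 7],
    "protected_class":   [1, 0, 1, 0, 1, 0],  # 1 = member of protected group
})

# Screen each candidate model input for correlation with the protected
# attribute; a strong correlation flags a potential proxy variable.
for feature in ["zip_median_income", "years_at_job"]:
    r = df[feature].corr(df["protected_class"])
    flag = "POTENTIAL PROXY" if abs(r) > 0.5 else "ok"
    print(f"{feature}: r = {r:+.2f}  [{flag}]")
```

A flagged feature is not necessarily unlawful to use, but under the agencies' framing it warrants scrutiny before deployment.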

In the CFPB's announcement accompanying the joint statement, Chopra said the CFPB is particularly scrutinizing the use of AI in advertising, underwriting and home valuation.

Federal focus on AI bias – continued and heightened

Each of the agencies has previously taken steps to underscore the importance of ensuring AI is developed and used in a legally compliant manner that avoids discriminatory impact. As noted in the joint statement, in its 2022 Report to Congress on AI, the FTC urged that AI systems be "transparent, which includes the need for it to be explainable and contestable." That report, like the joint statement, stressed the need for entities to be able to explain the reasoning behind individual algorithmic decisions, with a particular focus on guarding against risk.

In January 2023, the DOJ filed a statement of interest asserting that alleged bias in a tenant screening algorithm violated the Fair Housing Act. Discussing the joint statement, Assistant Attorney General Kristen Clarke suggested the DOJ was ready to "hold accountable those entities that fail to address the discriminatory outcomes," reinforcing the focus on bias resulting from model decisions.

Focusing on the broader lending space, the CFPB published a May 2022 circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology being used. Like the joint statement, the circular asserted that the Equal Credit Opportunity Act (ECOA) and Regulation B require creditors to provide statements of specific reasons to applicants against whom adverse action is taken, effectively prohibiting the use of "black box" algorithms in such situations. The CFPB also will release a white paper this spring on the current chatbot market and the impact on consumers of financial institutions' integration of chatbots.
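
In practice, the specific-reasons requirement means a creditor must be able to trace a denial back to the inputs that drove it. The sketch below illustrates one common approach, ranking feature contributions in a simple linear scoring model; the model, weights, baselines and reason codes are all hypothetical, and real adverse action programs are considerably more involved:

```python
import numpy as np

# Hypothetical linear credit-scoring model: score = weights . x + bias.
# All feature names, weights and baselines here are illustrative only.
FEATURES = ["debt_to_income", "delinquencies", "credit_history_years"]
WEIGHTS = np.array([-3.0, -1.5, 0.8])    # model coefficients
BASELINE = np.array([0.30, 0.0, 12.0])   # e.g., approved-population averages

REASON_TEXT = {
    "debt_to_income": "Debt-to-income ratio too high",
    "delinquencies": "Number of delinquent accounts",
    "credit_history_years": "Insufficient length of credit history",
}

def adverse_action_reasons(x, top_n=2):
    """Rank features by how far they pulled this applicant's score below
    the baseline and return reason statements for the worst offenders."""
    contributions = WEIGHTS * (np.asarray(x, dtype=float) - BASELINE)
    order = np.argsort(contributions)  # most negative contribution first
    return [REASON_TEXT[FEATURES[i]] for i in order[:top_n]
            if contributions[i] < 0]

# Declined applicant: 55% DTI, two delinquencies, 3-year credit history.
print(adverse_action_reasons([0.55, 2, 3]))
# ['Insufficient length of credit history', 'Number of delinquent accounts']
```

A creditor that cannot produce this kind of decomposition for its model – whatever the technique – would struggle to satisfy the specific-reasons requirement, which is the sense in which the circular effectively bars "black box" underwriting.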

Commissioners at the FTC have previously taken an expansive interpretation of Section 5, asserting authority to target firms that use "black box" algorithms that result in bias. In recent years, the FTC has required firms to destroy algorithms or other work product that it asserted were trained on improperly collected data. The FTC previously warned that improper use of AI could violate Section 5 of the Federal Trade Commission Act, the Fair Credit Reporting Act and the ECOA.

The White House also has weighed in on the risk of discrimination, noting in its Blueprint for an AI Bill of Rights that "Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination." The White House further stated that protections should include "ongoing disparity testing and mitigation" and "[i]ndependent evaluation," among other consumer protection precautions. The Government Accountability Office noted in its 2021 AI accountability framework that AI has the potential to "amplify existing biases and concerns related to civil liberties, ethics, and social disparities," and that the US government and others were working to address such concerns.
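
"Ongoing disparity testing" can be as simple as routinely comparing outcome rates across groups. The sketch below computes an adverse impact ratio (AIR) and checks it against the four-fifths rule of thumb drawn from EEOC employment guidelines; the group labels and approval counts are hypothetical:

```python
# Minimal disparity-testing sketch: compare approval rates across two
# groups and compute the adverse impact ratio (AIR). Counts are made up.
approvals = {"group_a": 460, "group_b": 310}   # approved applicants
applicants = {"group_a": 800, "group_b": 700}  # total applicants

rate_a = approvals["group_a"] / applicants["group_a"]   # 0.575
rate_b = approvals["group_b"] / applicants["group_b"]   # ~0.443

air = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: {rate_a:.3f} vs {rate_b:.3f}; AIR = {air:.2f}")
if air < 0.8:  # four-fifths rule of thumb
    print("AIR below 0.80 -- investigate for potential disparate impact")
```

An AIR below 0.80 is not itself a legal finding, but it is the kind of signal the Blueprint's "ongoing disparity testing and mitigation" contemplates flagging for further review.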

Implications for financial services firms

While the use of cutting-edge AI technology can result in demonstrable benefits – for businesses and consumers – the joint statement makes clear that financial institutions will need to understand the impacts of deploying AI throughout their operations. Thus, financial institutions that rely on AI, or on third-party providers whose services depend on AI, should assess fair lending compliance risks at the point of implementation – and monitor those risks throughout product development and deployment.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.