On July 18, 2023, Federal Reserve Vice Chair for Supervision Michael Barr cautioned banks against fair lending violations arising from their use of artificial intelligence (AI). He warned that training on data that reflect societal biases; data sets that are incomplete, inaccurate, or nonrepresentative; algorithms built on variables unintentionally correlated with protected characteristics; and other problems can all produce discriminatory results.
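The proxy-variable problem can be illustrated with a simple screening exercise. The sketch below, which is not drawn from the speech and uses entirely hypothetical column names and data, flags candidate model features whose correlation with a protected characteristic is high enough to warrant review:

```python
# Illustrative sketch (not from the speech): a first-pass screen for model
# features that may act as proxies for a protected characteristic.
# All column names, values, and the threshold are hypothetical.
import pandas as pd

# Hypothetical applicant data; in practice this would come from a
# lender's loan-level records.
df = pd.DataFrame({
    "zip_median_income": [42_000, 38_000, 91_000, 87_000, 40_000, 95_000],
    "years_at_address":  [2, 3, 11, 9, 1, 12],
    "debt_to_income":    [0.45, 0.41, 0.22, 0.25, 0.48, 0.20],
    # 1 = member of a protected class, 0 = not (hypothetical labels)
    "protected_class":   [1, 1, 0, 0, 1, 0],
})

# Flag features whose correlation with the protected characteristic
# exceeds a review threshold. A correlation screen is only a heuristic;
# it can neither prove nor rule out a proxy effect.
THRESHOLD = 0.5
for feature in ["zip_median_income", "years_at_address", "debt_to_income"]:
    corr = df[feature].corr(df["protected_class"])
    flag = "REVIEW" if abs(corr) > THRESHOLD else "ok"
    print(f"{feature:>18}: corr={corr:+.2f}  [{flag}]")
```

A screen like this is only a starting point; a feature can be benign despite a high correlation, or discriminatory despite a low one, which is why regulators expect fuller model risk-management programs rather than single metrics.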

Vice Chair Barr's remarks come against a backdrop of increased concern among federal financial regulators about financial institutions' deployment of AI and other automated systems. These authorities fear that deploying such systems without appropriate guardrails can lead financial institutions to violate various laws and to take on unnecessary risks to their safety and soundness. Similar fears are shared across the federal government, prompting the leaders of key U.S. federal enforcement agencies this spring to stress their common intent to crack down on algorithmic discrimination.

Vice Chair Barr welcomed the appropriate use of new AI technology, recognizing its potential to leverage digital data sources at scale and at low cost to expand access to credit. However, because AI use also carries risks of violating fair lending laws and perpetuating disparities in credit transactions, Vice Chair Barr called it "critical" for regulators to update their application of the Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA) to keep pace with these new technologies and prevent new versions of old harms. He noted that AI can violate fair lending laws through both disparate treatment (treating credit applicants differently based on a protected characteristic) and disparate impact (apparently neutral practices that nonetheless produce different results based on a protected characteristic). As an example, Vice Chair Barr cited digital redlining, in which majority-minority communities or minority applicants are denied access to credit and housing opportunities. He also called out reverse redlining, in which "more expensive or otherwise inferior products" are pushed to minority communities.
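One common way practitioners quantify disparate-impact risk is an adverse impact ratio comparing approval rates across groups. The sketch below borrows the "four-fifths rule" from employment law as a screening benchmark; the speech does not reference this metric, the 0.8 threshold is not a fair lending legal standard, and all numbers are hypothetical:

```python
# Illustrative sketch: measuring disparate impact in model approval rates
# with an adverse impact ratio. The 0.8 benchmark borrows the employment
# law "four-fifths rule" as a screening heuristic only; it is not a fair
# lending legal standard. All numbers are hypothetical.

def adverse_impact_ratio(approved_a: int, total_a: int,
                         approved_b: int, total_b: int) -> float:
    """Ratio of group A's approval rate to group B's approval rate."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

# Hypothetical outcomes from an automated underwriting model.
ratio = adverse_impact_ratio(approved_a=120, total_a=300,   # protected group
                             approved_b=450, total_b=700)   # comparison group

print(f"Adverse impact ratio: {ratio:.2f}")  # ~0.62 on these numbers
if ratio < 0.8:
    print("Below the 0.8 screening benchmark: warrants fair lending review.")
```

A ratio below the benchmark does not establish a violation; it signals that the apparently neutral practice is producing materially different results and should be examined for a legitimate business justification and less discriminatory alternatives.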

Relatedly, Vice Chair Barr mentioned expected changes to the implementing regulations under the Community Reinvestment Act (CRA), which was enacted after the FHA and ECOA to further address redlining and other systemic inequities in access to credit, investment, and banking services. As part of CRA exams, bank examiners evaluate whether there is any evidence of discriminatory or other illegal credit practices inconsistent with helping to meet community credit needs. Vice Chair Barr noted the ongoing interagency work to adapt CRA regulations and evaluations to technological advancements in banking.

Vice Chair Barr also supported two recent policy initiatives to address appraisal discrimination and bias in mortgage transactions. On June 1, 2023, the Federal Reserve and several other federal financial agencies issued a Notice of Proposed Rulemaking on the use of AI and other algorithmic systems in appraising home values (Proposed Rule). And on June 8, 2023, the same agencies invited public comment on guidance to assist financial institutions in incorporating "reconsiderations of value" into their home appraisal processes, which could help mitigate the risk of improperly valuing real estate.

With all these changes (and others in the pipeline), financial institutions should consider establishing comprehensive AI risk-management programs now and should keep a close watch on the evolving legal landscape. Financial institutions that do not heed regulators' warnings may face problems in compliance exams or be subjected to investigations over their use of AI. If federal banking regulators identify patterns or practices of discrimination in a financial institution's AI, those regulators could refer the matter to the Department of Justice for enforcement.
