Welcome to this week's issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.

The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have dramatically increased the federal government's interest in AI and its implications. In these weekly reports, we aim to keep our clients and friends abreast of this Washington-focused legislative, executive, and regulatory activity.

This issue covers recent actions by the Consumer Financial Protection Bureau (“CFPB” or “Bureau”) on AI. The Federal Trade Commission is not the only consumer protection authority seeking to apply its existing enforcement authority to the domain of AI. The CFPB, a body tasked by Congress with “implementing and enforcing Federal consumer financial laws,” has recently issued statements and a proposed rule addressing how certain deployments of AI may violate consumer financial law.

Our key takeaways are:

  1. In May 2022, the CFPB released a circular asserting that creditors that use AI “in any aspect of their credit decisions must still provide a notice that discloses the specific principal reasons for taking an adverse action.” Following up on this circular, a September 2023 CFPB circular asserts that creditors relying on AI models to make credit decisions must provide applicants subject to adverse actions with “specific” notices explaining the “principal reason(s) for the adverse action.”
  2. A June 2023 CFPB report warns financial institutions that they “risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology.”
  3. The CFPB proposed a rule in June 2023 that would require certain entities using automated systems in credit decisions to “adopt policies, practices, procedures, and control systems to ensure that [certain AI models] used in certain credit decisions or covered securitization determinations adhere to quality control standards designed to meet specific quality control factors.” The comment period on this rulemaking has closed, but no final rule has yet been issued.

Consumer Financial Protection Bureau Increases Scrutiny of AI

Much of the attention regarding the role of executive agencies in AI regulation has understandably focused on the Federal Trade Commission (“FTC” or “Commission”). Under the leadership of Chair Lina Khan, the FTC has filed a complaint against an AI company and has called on Congress to delegate more regulatory authority over AI to the Commission. But the FTC is not the only consumer protection authority seeking to apply its existing authority to AI.

The Consumer Financial Protection Bureau (“CFPB” or “Bureau”), established in the wake of the 2008 financial crisis by the Dodd-Frank Wall Street Reform and Consumer Protection Act, is tasked by Congress with “implementing and enforcing Federal consumer financial laws; reviewing business practices to ensure that financial services providers are following the law; monitoring the marketplace and taking appropriate action to make sure markets work as transparently as they can for consumers; and establishing a toll-free consumer hotline and website for complaints and questions about consumer financial products and services.”

At the time of publishing, the Supreme Court has under submission Consumer Financial Protection Bureau v. Community Financial Services Association of America, Limited. The case hinges on whether the CFPB's funding model, under which the agency draws its funding from the Federal Reserve rather than through congressional appropriations, is unconstitutional. If successful, the challenge could place the agency's future in jeopardy.

This and other recent challenges to the CFPB's authority have not prevented the agency from beginning to assert its authority over certain AI matters. Under the tenure of Director Rohit Chopra, a former FTC Commissioner, the CFPB has issued policy statements and a proposed rule on AI issues under the Bureau's purview. This newsletter analyzes and contextualizes these developments.

Joint Statement on AI

As discussed in a previous newsletter, in April 2023, key consumer protection and law enforcement agencies, including the CFPB, issued a non-binding “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.”

The statement references a May 2022 CFPB circular (discussed below) that emphasizes that creditors “who use complex algorithms, including artificial intelligence or machine learning, in any aspect of their credit decisions must still provide a notice that discloses the specific principal reasons for taking an adverse action,” or else risk violating the Equal Credit Opportunity Act (“ECOA”).

The Joint Statement characterizes this circular as meaning that “federal consumer financial laws and adverse action requirements apply regardless of the technology being used.” Furthermore, the Joint Statement asserts that “the fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”

CFPB Guidance on Credit Denials by Lenders Using AI

In May 2022, the CFPB released a circular on “Adverse action notification requirements in connection with credit decisions based on complex algorithms.”

In this document, the CFPB asserts that under the terms of the ECOA, creditors must provide statements to applicants against whom adverse action is taken, regardless of whether credit decisions were made using complex and inscrutable algorithms. These adverse actions include “denying an application for credit, terminating an existing credit account, making unfavorable changes to the terms of an existing account, and refusing to increase a credit limit.”

According to Regulation B, which implements the ECOA, a statement of the reasons for adverse action “must be specific and indicate the principal reason(s) for the adverse action.” In this circular, the CFPB asserts that creditors that use AI “in any aspect of their credit decisions must still provide a notice that discloses the specific principal reasons for taking an adverse action.”

This requirement applies even to creditors using so-called “black box” algorithms, or algorithms whose decision-making process is not clear, even to the algorithm's own developers. As the circular states: “A creditor's lack of understanding of its own methods is therefore not a cognizable defense against liability for violating ECOA and Regulation B's requirements.”

Previous actions demonstrate that the CFPB can effectively move against perceived violations of the adverse action notification requirements. In September 2021, the Bureau sued LendUp Loans, LLC (“LendUp”) in part for failing “to provide timely and accurate adverse-action notices.” In December 2021, the CFPB announced that LendUp had agreed “to halt making any new loans and collecting on certain outstanding loans, as well as to pay a penalty…” LendUp ceased its loan operations in January 2022.

Lenders utilizing AI tools should be wary of LendUp's example and not underestimate the CFPB's willingness and capacity to pursue perceived violations of adverse action notification requirements.

Use of CFPB Sample Forms for Adverse Action Notices

As discussed earlier, the ECOA mandates that creditors that subject applicants to an adverse action (whether denial of credit, termination of an existing credit account, or a number of other actions) must provide “specific” statements that indicate the “principal reason(s) for the adverse action.” Regulation B provides sample forms that creditors may, under certain circumstances, complete and send to applicants to fulfill the adverse action notification requirement.

A September 2023 CFPB circular asserts that reliance on the checklist of reasons for adverse action “provided in the sample forms will satisfy a creditor's adverse action notification requirements only if the reasons disclosed are specific and indicate the principal reason(s) for the adverse action taken.” Institutions that make credit decisions on the basis of algorithms that “rely on data that are harvested from consumer surveillance or data not typically found in a consumer's credit file or credit application” may subject an applicant to an adverse action for a reason not listed in the Regulation B sample forms.

These creditors may not, according to this circular, “simply select the closest, but nevertheless inaccurate, identifiable factors from the checklist of sample reasons” included in Regulation B. Creditors making decisions on the basis of algorithms must instead provide specific reasons why they have subjected an applicant to an adverse action.

For example, if a creditor decides to close a consumer's credit line on the basis of an algorithm that collects data on the types of goods purchased by the customer, “it would likely be insufficient for the creditor to simply state ‘purchasing history' … as the principal reason for an adverse action. Instead, the creditor would likely need to disclose more specific details about the consumer's purchasing history or patronage that led to the reduction or closure, such as the type of establishment, the location of the business, the type of goods purchased, or other relevant considerations, as appropriate.”
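
To make this requirement concrete, the sketch below shows one way a creditor might translate a model's per-feature contributions into the kind of specific reason statements the circular describes, rather than reaching for a generic checklist entry. This is a minimal illustration only: the model, feature names, reason wording, and threshold are all hypothetical assumptions on our part, not a CFPB-prescribed method.

```python
# Hypothetical sketch: deriving specific adverse-action reasons from a
# model's per-feature contributions instead of a generic checklist entry.
# The feature names, weights, and reason text are illustrative assumptions.
import numpy as np

# Illustrative linear credit model: score = weights . features + bias.
FEATURES = ["grocery_spend_ratio", "late_night_merchant_visits", "debt_to_income"]
WEIGHTS = np.array([-0.4, -1.1, -2.0])  # negative weights lower the score
BIAS = 1.5
APPROVAL_THRESHOLD = 0.0

# Map each model feature to a specific, human-readable reason, in the
# spirit of the circular's "type of establishment / goods purchased" example.
REASON_TEXT = {
    "grocery_spend_ratio": "Low share of spending at grocery establishments",
    "late_night_merchant_visits": "Frequent purchases at late-night merchant locations",
    "debt_to_income": "Debt obligations are high relative to income",
}

def adverse_action_reasons(x: np.ndarray, top_n: int = 2) -> list[str]:
    """Return specific principal reasons if the application is declined."""
    score = WEIGHTS @ x + BIAS
    if score >= APPROVAL_THRESHOLD:
        return []  # approved: no adverse-action notice needed
    contributions = WEIGHTS * x                # per-feature effect on the score
    worst = np.argsort(contributions)[:top_n]  # most negative contributors
    return [REASON_TEXT[FEATURES[i]] for i in worst]

applicant = np.array([0.1, 6.0, 0.9])
print(adverse_action_reasons(applicant))
```

In this toy setup, the notice would cite the applicant's specific purchasing patterns rather than a vague “purchasing history” entry; real systems would, of course, require far more rigorous attribution methods and legal review.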

Proposed Rulemaking on AI Home Appraisals

In June 2023, the CFPB and affiliated agencies proposed a rule that would regulate the use of algorithms to appraise the value of a home, known as automated valuation models (“AVMs”).

As AI technology has improved, some mortgage loan originators have begun using automated valuation models to supplement or replace the work of human appraisers. Though some may consider the decisions made by these AVMs to be more impartial than those made by their human counterparts, the CFPB asserts that “automated valuation models can make bias harder to eradicate in home valuations because the algorithms used cloak the biased inputs and design in a false mantle of objectivity.”

The proposed rule, Quality Control Standards for Automated Valuation Models, would require the adoption of AVM quality control policies by relevant market actors. The rule applies to two types of entities. The first class of covered entity is “mortgage originators,” or any entity that takes a residential mortgage loan application, assists a consumer in obtaining or applying to obtain a residential mortgage loan, or offers or negotiates terms of a residential mortgage loan.1 The second class of covered entity is “secondary market issuers,” or “any party that creates, structures, or organizes a mortgage-backed securities transaction.”

The proposed rule would mandate that covered entities “adopt policies, practices, procedures, and control systems to ensure that AVMs used in certain credit decisions or covered securitization determinations adhere to quality control standards designed to meet specific quality control factors.” The rule would not prescribe the content of these controls, instead allowing institutions “flexibility to set quality controls for AVMs as appropriate based on the size of the institution and the risk and complexity of transactions for which they will use AVMs covered by this proposed rule.”
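
Because the rule would leave the design of controls to each institution, the following is purely an illustrative sketch, assuming a back-testing control that compares AVM estimates against subsequent sale prices. The metric, tolerance, and function names are our hypothetical choices, not requirements of the proposed rule.

```python
# Hypothetical sketch of one quality-control check an institution might
# adopt for an AVM: back-testing model estimates against realized sale
# prices and flagging when error exceeds an institution-chosen tolerance.
from statistics import median

def avm_backtest(estimates: list[float], sale_prices: list[float],
                 mdape_tolerance: float = 0.10) -> dict:
    """Compare AVM estimates to realized sale prices for closed transactions."""
    pct_errors = [abs(e - p) / p for e, p in zip(estimates, sale_prices)]
    mdape = median(pct_errors)  # median absolute percentage error
    return {
        "mdape": round(mdape, 4),
        "within_tolerance": mdape <= mdape_tolerance,
        "sample_size": len(pct_errors),
    }

# Example: three recent closed transactions
print(avm_backtest(
    estimates=[310_000, 455_000, 198_000],
    sale_prices=[300_000, 500_000, 205_000],
))
```

An institution adopting something like this would also need to document the policy, set tolerances appropriate to its size and transaction risk, and define what happens when a check fails, consistent with the flexibility the proposal contemplates.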

On June 21, 2023, the CFPB opened a public comment period for the proposed rule. Two months later, the comment period closed, with the agency having received over 30 posted comments. We will continue to monitor developments related to this proposed rule.

Spotlight on the Use of AI Chatbots by Financial Institutions

In June 2023, the CFPB released a report entitled Chatbots in consumer finance. Chatbots are “computer programs that mimic elements of human conversation.” With developments in generative AI technology, chatbots have become more sophisticated and better able to handle complex inquiries.

The CFPB report found that financial institutions are “increasingly using chatbots as a cost-effective alternative to human customer service.” The report warned that financial institutions “risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology.”

“With the growing use of chatbots by financial institutions,” notes the report, “complaints from the public [to the CFPB] increasingly describe issues people experienced when interacting with chatbots.” These include difficulties in resolving customers' disputes, provision of inaccurate information, and failure to provide meaningful customer assistance. In addition to these inconveniences, the CFPB claims that financial institutions' widespread deployment of chatbot systems can engender security and data privacy risks.

Given these risks, the CFPB asserts that financial institutions “run the risk that when chatbots ingest customer communications and provide responses, the information chatbots provide may not be accurate, the technology may fail to recognize that a consumer is invoking their federal rights, or it may fail to protect their privacy and data,” potentially placing these institutions out of compliance with federal consumer financial laws.
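
As a purely hypothetical illustration of the kind of safeguard this concern points toward, the sketch below routes a message to a human agent whenever a consumer appears to be invoking a federal right. The trigger phrases and routing logic are our assumptions for illustration and are not drawn from the CFPB report.

```python
# Hypothetical sketch: a guardrail that escalates to a human agent when a
# customer appears to be invoking a federal right (e.g., disputing a billing
# error), rather than letting a chatbot answer. Illustrative phrases only.
TRIGGER_PHRASES = [
    "dispute",               # billing-error disputes
    "error on my bill",
    "identity theft",
    "stop contacting me",    # debt-collection communications
    "credit report is wrong",
]

def route_message(message: str) -> str:
    """Return 'human' when the message may invoke a federal consumer right."""
    text = message.lower()
    if any(phrase in text for phrase in TRIGGER_PHRASES):
        return "human"  # escalate: do not rely on the chatbot's answer
    return "chatbot"

print(route_message("I want to dispute a charge on my statement"))  # human
print(route_message("What are your branch hours?"))                 # chatbot
```

A production system would need far more robust detection than keyword matching, but the design point stands: recognizing a rights invocation is a compliance function, not merely a customer-service one.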

“The shift away from relationship banking and toward algorithmic banking,” notes the report, “will have a number of long-term implications that the CFPB will continue to monitor closely.”

We will continue to monitor, analyze, and issue reports on these developments.

Footnote

1. Certain exceptions and inclusions apply. Please reference 15 U.S.C. 1602(dd)(2) for a complete definition of “mortgage originator” as used in the proposed CFPB rule.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.