On March 29, 2021, US federal financial regulators1 issued a request for information ("RFI") on financial institutions' use of artificial intelligence ("AI"). The RFI seeks information on the use of AI by financial institutions for a variety of purposes, including fraud prevention, customer service, credit decisioning, and other internal and external operations. In particular, the regulators seek to understand the challenges and opportunities that AI use cases pose with respect to governance, risk management, and controls. Comments are due by June 1, 2021. In this Legal Update, we discuss notable takeaways from the RFI and summarize the areas in which the regulators seek feedback.
Summary of RFI Topics
The use of AI by financial institutions is not new. From analyzing contracts and financial statements, to overseeing traders, to communicating with customers through chatbots, AI and machine learning give financial institutions efficient ways to analyze large volumes of data and improve customer service, among other benefits. As AI use cases evolve, financial institutions will face new opportunities and challenges in meeting regulatory expectations with respect to safety and soundness and consumer protection. In the introduction to the RFI, the regulators highlight three areas that could pose risk management challenges: (1) explainability (how the AI uses inputs to produce outputs); (2) data usage (including potential bias or limitations in data); and (3) dynamic updating (the ability to validate AI use cases in a constantly changing environment).
The RFI seeks industry feedback on the following topics:
- Explainability: How institutions manage AI explainability risks, the types of post-hoc methods institutions use to evaluate conceptual soundness, and the types of use cases that present particular explainability challenges.
- Bias in data (including raw and alternative data): How institutions manage risks related to data quality and data processing, and whether there are specific AI use cases where alternative data are effective.
- Overfitting (where the algorithm learns from idiosyncratic patterns in the training data that are not representative of the broader population): How institutions manage the risks of overfitting.
- Cybersecurity: Whether institutions have identified particular cybersecurity risks related to AI and the types of controls that can be employed.
- Dynamic updating (where the AI has the capacity to update on its own): How institutions manage risks related to dynamic updating, particularly validation, monitoring, and tracking.
- AI in community institutions: Whether community institutions face particular challenges in developing, adopting, or using AI.
- Third-party oversight: The challenges institutions face in using AI developed or provided by third parties.
- Fair lending: The ability to evaluate compliance of AI-based credit decisions with fair lending laws; risk of bias or discriminatory impact of AI; application of model risk management principles; challenges of performing fair lending risk assessments on AI; and ability to identify reasons for adverse actions from AI decisions.
Focus on fair lending: The largest set of questions in the RFI relates to fair lending considerations associated with the use of AI, including challenges that institutions face in evaluating bias and discriminatory impact on protected groups, and the potential limitations of model risk management principles in making those determinations. The RFI also seeks information on how institutions comply with requirements under the Equal Credit Opportunity Act and its implementing regulation, Regulation B, to notify consumers of the reason(s) for taking an adverse action on a credit application where the reason for a decision made by an AI-powered decision engine may not be transparent. The latter has been of particular interest to the CFPB. In June 2020, the CFPB reminded institutions of the "regulatory uncertainty" in this space, and encouraged institutions to use the Bureau's trial disclosure program, no action letter program, and compliance assistance sandbox to potentially address the regulatory uncertainty associated with AI and adverse action notice requirements.2
Although regulators continue to express concerns about fair lending risks associated with AI,3 they have yet to articulate expectations for financial institutions' use of these technologies or how they will be evaluated in the future. For example, if an AI or machine learning model has a disparate impact on a prohibited basis, what documentation will regulators accept to demonstrate that the model is supported by a legally sufficient business justification? The RFI gives institutions an opportunity to provide feedback on the types of fair lending testing and other techniques financial institutions are using to evaluate and mitigate fair lending risks in AI and machine learning models.
Precursor to additional oversight: The tone of the RFI reflects regulators' concerns about the explainability and auditability of AI systems, including how regulators will test these systems going forward. International financial regulators have been seeking information about financial institutions' use of AI for years, and some regulators have already begun issuing guidance and scrutinizing AI use cases. For example, in 2019 the Hong Kong Monetary Authority published two sets of risk management guidelines for the use of big data and AI.4 Among other things, these principles encourage financial institutions to maintain audit logs associated with the design of AI, provide avenues for consumers to request information about decisions made by AI applications, and ensure that AI models produce "objective, consistent, ethical and fair outcomes to customers." The Monetary Authority of Singapore ("MAS") published a set of principles governing the use of AI and data analytics in Singapore's financial sector.5 MAS has been partnering with financial institutions to test its "FEAT" (fairness, ethics, accountability, and transparency) principles against actual AI use cases through the Veritas initiative. This public-private partnership resulted in the development of open-source metrics to help financial institutions test fairness in the use of AI for credit risk scoring and consumer marketing.6 The UK Information Commissioner's Office has proposed an AI auditing framework that its investigations teams will use when evaluating the compliance of organizations using AI, and has encouraged entities to use this framework to audit their own AI systems.7
Although it is not clear whether US regulators will follow these international trends, the RFI suggests that the US financial regulators are, at a minimum, beginning to think strategically about issues of fairness, governance, and risk management in anticipation of potential future guidance or regulation.
Reminder of existing regulations and guidance: The RFI contains a "non-exhaustive" list of laws, regulations, supervisory guidance, and agency statements that are relevant to AI, such as the agencies' longstanding model risk and third-party risk management guidance. While some of these statements contain broad-based principles, the piecemeal nature of this laundry list of guidance highlights the challenges financial institutions face in constantly retrofitting old regulations and guidance to new products and services.8 For example, the interagency guidance on model risk management was published almost a decade ago and articulates supervisory expectations for how institutions should evaluate conceptual soundness. However, models and associated model risk management practices have evolved over the last 10 years, and, as highlighted in the RFI, current industry practices for evaluating conceptual soundness now incorporate post-hoc methods. Similarly, existing agency statements on managing third-party risk address model-related issues, such as maintaining intellectual property rights and negotiating access and audit rights, in a cursory manner that does not contemplate the unique features of AI use cases, including dynamic updating and alternative data. The RFI provides financial institutions with an opportunity to help shape an updated framework designed to address this constantly evolving space.
If you are interested in submitting comments on these important topics, please contact us. Comments are due by June 1, 2021.
1 The RFI was jointly issued by the Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, and Office of the Comptroller of the Currency.
2 Patrice Alexander Ficklin, Tom Pahl, and Paul Watkins, CFPB Blog, Innovation spotlight: Providing adverse action notices when using AI/ML models (July 7, 2020), available at https://www.consumerfinance.gov/about-us/blog/innovation-spotlight-providing-adverse-action-notices-when-using-ai-ml-models/.
3 Id. See also Federal Deposit Insurance Corporation, FIL-82-2019, Interagency Statement on the Use of Alternative Data in Credit Underwriting (Dec. 13, 2019), available at https://www.fdic.gov/news/financial-institution-letters/2019/fil19082.pdf.
4 Hong Kong Monetary Authority, Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions (Nov. 5, 2019), available at https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2019/20191105e1.pdf; Hong Kong Monetary Authority, High-level Principles on Artificial Intelligence (Nov. 1, 2019), available at https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2019/20191101e1.pdf.
5 Monetary Authority of Singapore, Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector (Nov. 12, 2018), available at https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf.
6 Monetary Authority of Singapore, Veritas Initiative Addresses Implementation Challenges in the Responsible Use of Artificial Intelligence and Data Analytics (Jan. 6, 2021), available at https://www.mas.gov.sg/news/media-releases/2021/veritas-initiative-addresses-implementation-challenges.
7 Information Commissioner's Office, Guidance on the AI auditing framework: Draft guidance for consultation (Feb. 14, 2020), available at https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf.
8 Even the agencies appear to struggle when stretching old laws to new issues. See our Legal Update on the agencies' recent proposal for new cyber incident notification requirements that is based in part on laws that have not been updated since the early 1980s: https://www.mayerbrown.com/en/perspectives-events/publications/2020/12/new-incident-notification-requirements-proposed-by-federal-regulators-for-us-financial-institutions-and-their-service-providers.
Visit us at mayerbrown.com
Mayer Brown is a global legal services provider comprising legal practices that are separate entities (the "Mayer Brown Practices"). The Mayer Brown Practices are: Mayer Brown LLP and Mayer Brown Europe - Brussels LLP, both limited liability partnerships established in Illinois USA; Mayer Brown International LLP, a limited liability partnership incorporated in England and Wales (authorized and regulated by the Solicitors Regulation Authority and registered in England and Wales number OC 303359); Mayer Brown, a SELAS established in France; Mayer Brown JSM, a Hong Kong partnership and its associated entities in Asia; and Tauil & Chequer Advogados, a Brazilian law partnership with which Mayer Brown is associated. "Mayer Brown" and the Mayer Brown logo are the trademarks of the Mayer Brown Practices in their respective jurisdictions.
© Copyright 2020. The Mayer Brown Practices. All rights reserved.
This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.