31 October 2024

Balancing Act: Managing AI Governance Risks In Financial Services

This article will explore the transformative impact of AI in the financial sector, particularly focusing on the challenges and opportunities these technologies present. AI in finance is not just a future concept but is already very much a reality. We are now living in a world where AI is helping us decide who is able to get a mortgage, who is flagged for potential fraud, and how financial markets move. AI is becoming integral to the financial industry, from chatbots that may sound more human than actual humans, to algorithms that can help approve loans within moments. But as AI becomes so instrumental to decision-making in finance, what happens if AI starts making decisions that we can't fully understand or control?

Although the opportunities of AI in the financial industry are incredibly exciting, integrating AI fully brings considerable compliance challenges, including increased possibilities of manipulation, cyberattacks, and difficulties in model explainability and accountability. How do we manage the risks associated with the use and adoption of AI without impeding innovation within the industry? This discussion goes beyond technology to the heart of ethics, regulation and trust.

We will explore how financial institutions can build AI systems that are not only powerful but responsible — and how organizations can mitigate key data, security, and compliance risks while fostering a balance between innovation and regulatory compliance.

Navigating a Complex and Diverse AI Regulatory Landscape

Though the use of AI in the financial sector is exciting, it also comes with the inevitable wave of regulations.

Europe

We've already seen the enactment of the EU AI Act,1 which categorizes AI systems based on risk, with more material obligations imposed on providers of "high risk" AI systems and potential fines for noncompliance of up to 35 million euros or 7 percent of annual global turnover, whichever is higher. AI use cases in finance may be categorized as "high risk" under the EU AI Act in circumstances where they are likely to have a significant impact on individuals and society.

With the EU AI Act, we are still waiting to see whether we experience a "Brussels effect"2 similar to that of the EU GDPR,3 which reshaped how we think about data privacy across the world. Will the EU AI Act set global standards and influence how AI is regulated internationally? Some commentators argue that the EU AI Act is effectively a product safety law, with the principal aim of ensuring that AI products on the market meet certain regulatory standards and do not pose significant risks to consumers. However, the EU AI Act's scope is arguably wider, aiming to govern the development, deployment and use of AI systems within the EU while taking into account broader individual, societal and ethical considerations and harms.

Rest of World

There have also been notable AI regulatory developments in other parts of the world, including the U.S., Asia and the U.K. In the U.S., there is currently no overarching federal AI regulation and, unsurprisingly, the focus appears to be on self-regulation, decentralization and innovation. That said, building on the momentum of the Blueprint for an AI Bill of Rights,4 the Biden AI Executive Order5 and the NIST AI Risk Management Framework,6 a patchwork of state AI laws is starting to emerge, alongside increased scrutiny from agencies like the FTC focused on ensuring AI is transparent and ethical.

Other countries are also trying to strike this careful balance between AI innovation and regulatory control, and this very much remains an evolving space. The U.K., for example, takes a context-sensitive, pro-innovation approach to addressing AI risks. Its AI regulatory framework relies on existing sector-specific laws rather than comprehensive new regulation, guided by five key principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. In Asia, China has taken a proactive, scalable and controlled approach to AI regulation while Singapore has taken a pragmatic one; in South America, "Brussels effect" ideologies appear to be facing challenge.

Regulatory Fatigue?

One of the key challenges with the current AI regulatory landscape is possible regulatory lag: technology continues to evolve rapidly in this space, and regulations may struggle to keep pace or to fully address and comprehend the potential harms of AI.

Additionally, it remains an ongoing challenge for global organizations to deal with this complex, diverse and rapidly evolving regulatory landscape. With an influx of data-led regulations in recent years, and significant internal compliance efforts already made to comply with the wave of new privacy laws, organizations are arguably suffering from compliance fatigue. With hundreds of AI laws, guidelines and frameworks currently in force or proposed globally, how do organizations realistically stay on top of this landscape and put in place appropriate controls to mitigate key AI risks?

The next sections identify some of the key risks and challenges facing the financial services industry in this area and outline options for implementing operational controls to mitigate those risks.

AI Regulatory Risks and Considerations

Myriad risks and challenges confront organizations trying to adopt Responsible AI; some of the key themes are explored below. It is incumbent on organizations, as well as AI developers and regulators, to understand the complex landscape of AI risks (present, future, known and unknown) so that the necessary guardrails can be developed to mitigate those risks while ensuring the true value of AI is realized.

  1. Data Privacy and Security

In the financial industry, data holds immense value: it is treated as a critical asset that supports business enablement, digitalization and enhanced customer experiences, essentially providing organizations with a strategic advantage. Yet even as data is viewed as a strategic business asset, it carries significant risk, with potential for huge liabilities if it is mishandled or processed in a noncompliant manner. AI systems are trained on vast amounts of data, and financial systems hold a significant amount of sensitive personal and financial data, including credit scores, transaction data and identity details; with this comes increased risk of data breaches and unauthorized access. The algorithms themselves may inadvertently reveal personal data or allow reidentification of anonymized data. As organizations integrate more AI into their operations, they introduce new vulnerabilities and entry points for attackers, increasing the attack surface. Malicious actors may manipulate data and weaponize it, for example by reverse engineering a trained model to obtain information about its training data, or by poisoning data during the training phase so that models learn incorrect behavior.
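
To make the model-leakage risk concrete, the sketch below runs a simple membership-inference "smoke test" on a toy model: it compares the model's confidence on records it was trained on against records it has never seen. Everything here (data, features, model choice, thresholds) is a hypothetical stand-in rather than a real financial system.

```python
# A minimal membership-inference "smoke test": if a model is markedly more
# confident on the records it was trained on than on unseen records, its
# outputs may leak information about the training data. All data, features
# and the model here are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))           # stand-in for customer features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for a credit outcome

X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.5, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def mean_top_confidence(clf, data):
    """Average probability the model assigns to its predicted class."""
    return clf.predict_proba(data).max(axis=1).mean()

gap = mean_top_confidence(model, X_train) - mean_top_confidence(model, X_hold)
print(f"train/holdout confidence gap: {gap:.3f}")
# A large gap suggests memorization and elevated membership-inference risk,
# warranting stronger privacy controls (e.g., regularization or
# differentially private training).
```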

  2. Transparency, Explainability and Accountability

With AI, the privacy challenge is not only about protecting personal data but also about understanding what data is being used by which AI systems, who controls that data, and how that data is being used and shared. It is imperative for organizations to ensure that the use and sharing of data aligns with both regulatory and consumer expectations. Extensive personal data sets can be used to train AI models, yet these models are often described as "black boxes" because it is difficult to explain how decisions were made, even for the developers who designed them. Further, if the personal data used to train these systems is collected without explicit user consent or anonymization, this may also violate privacy laws. There is a complex interplay between data usage, transparency and consumer consent. However, while regulators are demanding transparency and explainability in AI systems, complying may come at the cost of AI performance. This is a trade-off that regulators and innovators will have to negotiate carefully, since the most sophisticated and powerful AI systems may be, by their very design, more opaque. And while lack of transparency and explainability presents a significant risk, a related risk is lack of accountability. Even if a decision can be explained, it is important to identify who would be accountable for the decision if it is challenged. And if something goes wrong, who is liable?
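
As an illustration of the kind of explainability tooling referenced above, the sketch below applies the open-source SHAP library to a toy credit model to produce per-decision feature attributions. The model, feature names and data are placeholder assumptions, not a prescribed approach.

```python
# A minimal sketch of post-hoc explainability using the open-source SHAP
# library: feature attributions for a single (hypothetical) credit decision.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_accounts"]
X = rng.normal(size=(500, 4))            # stand-in applicant features
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # stand-in approval outcome

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for one applicant

for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:>20}: {contribution:+.3f}")
# Positive contributions push the model toward approval, negative toward
# denial; a per-decision record like this can support accountability reviews.
```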

  3. Bias and Discrimination

One of the most significant risks in AI-driven financial systems is bias, which can lead to unfair lending practices, discriminatory credit decisions or inaccurate fraud detection. Algorithms can unintentionally favor or disadvantage certain groups, often reflecting historical prejudices present in the training data. AI bias can arise for a number of reasons, including biased data sets, poor model design, lack of diversity in AI development teams, and limited oversight or testing for fairness, reinforcing inequalities and systemic discrimination in the financial sector. For example, there have been noted instances of gender bias where women were offered significantly lower credit limits than men with similar credit profiles. This can have severe legal and ethical consequences. Moreover, once deployed, AI systems can further reinforce or exacerbate longstanding inequalities in society, where certain groups continue to be underserved and denied access to financial support because traditionally skewed data points, such as postal code or educational background, are used to determine creditworthiness. The question then arises: should the design of AI systems also aim to help correct longstanding societal biases and reengineer financial fairness?
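
One simple, widely used fairness check is the disparate impact ratio on approval rates across groups. The sketch below computes it on invented decisions; the 0.8 "four-fifths" threshold is a common rule of thumb, not a regulatory mandate for any particular jurisdiction.

```python
# A minimal fairness check: the "four-fifths" disparate impact ratio on
# approval rates across a protected attribute. All data is hypothetical.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # model decisions
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"disparate impact ratio: {impact_ratio:.2f}")
# A common rule of thumb flags ratios below 0.8 for review; a real program
# would use far larger samples and multiple metrics (e.g., equalized odds).
```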

  4. Operational Resilience

Even as the transformative capabilities of AI in the financial sector are significant and the benefits vast, can wide adoption of AI actually increase costs and financial risk? As AI automates more processes, the assumption is that it will reduce operational costs and the potential for human error, thereby reducing risk. Paradoxically, however, as reliance grows on technology that is integral to key financial operations, any failure in that system could cascade across the organization and create unintended systemic risks. In addition, as we become overreliant on technology and AI, with humans less involved in the processes, it may become even harder to respond to AI failures when they happen. Further, an AI system's capabilities are only as good as the data it is trained on, and they evolve over time as the system learns from new data. This creates the risk of "model drift," with degraded performance if critical financial predictions and decisions are made on the basis of outdated or inaccurate models, exposing financial institutions to financial and regulatory consequences on a wide scale. With reduced human oversight and overreliance on AI systems, it is critical for financial institutions to establish robust AI risk management practices, including ongoing model validation, performance monitoring and meaningful human involvement.
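
To illustrate how model drift can be monitored in practice, the sketch below computes the Population Stability Index (PSI), a metric commonly used in credit risk, comparing a feature's live distribution against its training-time baseline. The data and thresholds are illustrative assumptions.

```python
# A minimal drift-monitoring sketch using the Population Stability Index
# (PSI). Bucket counts, data and alert thresholds are illustrative only.
import numpy as np

def psi(baseline, live, bins=10):
    """PSI between a baseline distribution and live production data."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range values
    b_pct = np.histogram(baseline, edges)[0] / len(baseline)
    l_pct = np.histogram(live, edges)[0] / len(live)
    b_pct = np.clip(b_pct, 1e-6, None)         # avoid log(0)
    l_pct = np.clip(l_pct, 1e-6, None)
    return np.sum((l_pct - b_pct) * np.log(l_pct / b_pct))

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time feature distribution
live = rng.normal(0.4, 1.2, 10_000)      # shifted production distribution

score = psi(baseline, live)
print(f"PSI = {score:.3f}")
# Common rules of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate
# and consider revalidating or retraining the model.
```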

Operationalizing AI Governance

In this section, we consider how global financial institutions can navigate this fragmented regulatory landscape and stay compliant, even as regulation struggles to keep pace with AI's rapid development.

The opportunities of AI in financial services are significant; however, in order to fully realize this potential, the industry has to be proactive and intentional in managing the risks in a pragmatic way.

Taking into account the OECD AI Principles,7 as well as key themes underlying much of the recent AI legislation and guiding frameworks, organizations can consider the steps below when operationalizing AI governance and developing robust frameworks to support the adoption of responsible AI across the entire AI system lifecycle, from development to deployment to ongoing monitoring and accountability.

Themes and Example Operational Steps

AI Governance Framework

  • Establish a framework of policies, procedures, and roles and responsibilities for AI governance, development and use.
  • Develop an AI governance committee (e.g., ethics board) and AI Code of Ethics.
  • Track regulatory updates and determine key obligations impacting the organization.
  • Conduct ongoing training and upskilling of employees on AI governance including, e.g., developing training programs, providing continuous learning opportunities and delivering interactive workshops.
  • Draft technical documentation required to demonstrate compliance with rules relating to AI systems to competent national authorities.

AI Risk Management

  • Establish risk assessment processes to assess AI risks before deployment of AI use cases, particularly in high impact areas.
  • Review, map and categorize all AI systems used, including third-party systems, and implement system logs for record-keeping (see the inventory sketch after this list).
  • Identify priority high-risk systems for remediation.
  • Complete required assessments for all system types (e.g., Privacy Impact Assessments, Fundamental Rights Impact Assessments, Ethics Impact Assessments).
  • Develop processes, procedures and tools to help identify, test and remediate current and future risks.
  • Ensure systems perform consistently for their intended purpose.
  • Assess AI activities where third-party vendors and suppliers are involved. Update third-party risk management policies and procedures to address AI risks, including determining adequate contractual protection and liability mechanisms.
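
To make the system-mapping and record-keeping steps above more tangible, here is a minimal sketch of a machine-readable inventory record for an AI system. The schema, field names and risk tiers are illustrative assumptions, not a prescribed regulatory format.

```python
# A minimal, illustrative AI-system inventory record for mapping and
# record-keeping. Field names and risk tiers are hypothetical, not a
# prescribed regulatory schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    system_id: str                        # internal identifier
    name: str
    owner: str                            # accountable business owner
    vendor: Optional[str]                 # third-party supplier, if any
    purpose: str                          # intended use of the system
    risk_tier: str                        # e.g., "high", "limited", "minimal"
    personal_data: bool                   # processes personal data?
    last_assessment: Optional[date]       # most recent impact assessment
    assessments: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        system_id="ai-0042",
        name="Retail credit scoring model",
        owner="Consumer Lending",
        vendor=None,
        purpose="Creditworthiness assessment",
        risk_tier="high",
        personal_data=True,
        last_assessment=date(2024, 9, 1),
        assessments=["PIA", "FRIA"],
    ),
]

# Simple triage: surface high-risk systems for prioritized remediation.
for rec in inventory:
    if rec.risk_tier == "high":
        print(f"{rec.system_id}: {rec.name} -> prioritize review")
```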

Data Governance, Privacy and Security

  • Review model training data to ensure data input and output meet requirements concerning data quality and accuracy.
  • Ensure AI systems comply with internal privacy policy requirements and regulatory standards where personal data and sensitive information are involved, including, e.g., purpose limitation, data minimization, lawfulness and confidentiality.
  • Establish data governance structures, including clear policies on data usage, storage and access controls.
  • Implement appropriate security measures to mitigate risk of incidents.
  • Set up an AI red team function to identify, assess and help mitigate security vulnerabilities in AI systems.

Bias and Fairness

  • Keep up to date on fast-moving research on identifying, assessing and mitigating bias risks in AI systems.
  • Establish metrics and criteria to evaluate potential bias in algorithms to ensure AI systems do not disproportionately favor or harm specific groups.
  • Establish policies and responsible processes to detect and mitigate bias.
  • Determine technical tooling solutions as well as operational practices to help identify and mitigate bias.

Transparency, Explainability and Human Oversight

  • Ensure AI systems are developed and operated in a consistently transparent manner so users: a) know when they are interacting with AI; b) understand what data is being used to train AI models; and c) are enabled to understand system output.
  • Adopt explainability tools and techniques to help identify what led the model to reach decisions.
  • Guide senior management and employees in the design, implementation and improvement of their human oversight approach.
  • Implement human-in-the-loop systems and assign accountability within the organization for AI decisions.
  • Provide targeted training to employees directly involved in the development and use of AI models.

Monitoring and Auditing

  • Conduct regular internal and external audits of AI systems to ensure compliance with regulatory standards, legal requirements and ethical considerations.
  • Track and evaluate metrics of AI systems to ensure compliance levels are met.
  • Implement systems that track AI performance in real time, flagging anomalies, bias or deviations from expected behavior (see the monitoring sketch after this list).
  • Develop a process to respond to information requests and complaints.
  • Manage engagement with competent national authorities and regulating bodies, including reporting of serious incidents and high-risk AI use.
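
As a sketch of the real-time tracking step above, the snippet below flags when a rolling window of a model metric drifts too far from its baseline. The metric, thresholds and alerting logic are illustrative assumptions only.

```python
# A minimal real-time monitoring sketch: alert when the rolling mean of a
# model metric deviates from its baseline by more than a set number of
# standard deviations. Metric names and thresholds are illustrative.
from collections import deque

class MetricMonitor:
    def __init__(self, baseline_mean, baseline_std, window=50, z_threshold=3.0):
        self.baseline_mean = baseline_mean
        self.baseline_std = baseline_std
        self.window = deque(maxlen=window)   # rolling window of observations
        self.z_threshold = z_threshold

    def observe(self, value):
        self.window.append(value)
        mean = sum(self.window) / len(self.window)
        z = abs(mean - self.baseline_mean) / self.baseline_std
        if len(self.window) == self.window.maxlen and z > self.z_threshold:
            return f"ALERT: rolling mean {mean:.3f} deviates (z={z:.1f})"
        return None

# Example: monitor a credit model's daily approval rate against its baseline.
monitor = MetricMonitor(baseline_mean=0.62, baseline_std=0.02)
for day_rate in [0.61, 0.63, 0.60] + [0.71] * 60:  # simulated upward shift
    alert = monitor.observe(day_rate)
    if alert:
        print(alert)
        break
```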

Shaping the Future of AI in Financial Services

The opportunities of AI in financial services are vast and exciting — more personalized banking, faster decision-making, real-time fraud prevention — but regulators, organizations and industry bodies must work together to help design AI systems that are transparent, accountable and fair. Developing the AI governance landscape in the financial sector is a journey, and we are all part of shaping its future to ensure it serves customers fairly and ethically in the digital age.

Footnotes

1 "European Union's Artificial Intelligence Act, Regulation (EU) 2024/1689," https://artificialintelligenceact.eu/the-act/.

2 Jan Stappers, "What is the Brussels Effect?", Navex, February 13, 2024, https://www.navex.com/en-us/blog/article/what-is-the-brussels-effect/.

3 "EU General Data Protection Regulation (EU) 2016/679," https://gdpr.eu/tag/gdpr/.

4 "Blueprint for AI Bill of Rights," October 2022, https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

5 "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

6 "Artificial Intelligence Risk Management Framework (AI RMF) 1.0," National Institute of Standards and Technology, January 26, 2023, https://www.nist.gov/itl/ai-risk-management-framework.

7 "OECD AI Principles," OECD/LEGAL/0449 (adopted May 22, 2019; amended May 3, 2024), https://oecd.ai/en/ai-principles.

Originally published 29 October 2024

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
