ARTICLE
29 January 2026

Decision-making In Modern Financial Services – Using Tech While Staying Within The Lines

Herbert Smith Freehills Kramer LLP


With managers employing new technology to aid decision-making, we explore what steps are needed to ensure regulatory and supervisory expectations are met

This article explores the impact of technology on how leaders in financial services firms make decisions. We ask: as managers seek to make decisions which leverage technology, what steps are needed to ensure that financial services regulatory requirements and supervisory expectations are met?

In more analogue times, when data analysis was conducted by individuals, the inputs to decision-making were narrower and less comprehensive. Over time, technology has expanded the depth and breadth of data analysis. At the beginning of 2026, the use of AI in data analysis is well-established, with capabilities still growing. The change in the nature of the inputs to decision-making is important – in theory, better decisions should result, but digitalisation also affects risks, potentially magnifying them.

In this article, we focus on the steps that decision-makers in financial services firms can take to ensure that, where they leverage technology to make better decisions, they also take reasonable steps to guard against risks crystallising – in particular, regulatory risk.

We look at three aspects – deploying AI agents, the use of cloud AI and the 'near yet far' new technology of quantum computing.

When planning to leverage technology in their processes, decision-makers must apply robust due diligence.

Marina Reason
Partner, London

The use of AI agents and regulatory expectations

Regulators in the financial services sector expect that where AI is used, firms are aware of the risks associated with its use and have taken appropriate steps to manage and mitigate those risks.

The risks posed by the use of AI are not new, but this technology has the potential to amplify both the risks and their impacts. For example, assessing the creditworthiness of potential borrowers has always been a feature of lending activity. Using AI to support creditworthiness assessments can improve accuracy (the AI agent can analyse a broader range of data) and do so at speed. However, as with traditional analogue creditworthiness assessments, credit exposure and the risk of default remain. Use of an AI agent may reduce the likelihood of that risk crystallising, but an error in the AI system's processes could magnify the risk to which the firm is exposed many times over.

AI agents

AI agents are Large Language Model (LLM)-based systems that autonomously make decisions, select tools and execute complex tasks without step-by-step instructions. They learn continuously and adapt to new information and changing conditions.
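
To make the term concrete, the sketch below shows, in simplified Python, the basic loop such systems run: the model chooses the next action, a tool executes it, and the result is fed back until the model declares the task complete. Every name here (stub_llm, TOOLS, run_agent) is a hypothetical placeholder for illustration, not any particular vendor's framework.

    # Minimal, illustrative sketch of an AI agent loop. A real deployment would
    # replace stub_llm with a call to an LLM API; all names are hypothetical.

    def stub_llm(history):
        # Stand-in for the model: it decides the next step from the dialogue so far.
        if not any(m["role"] == "tool" for m in history):
            return {"action": "fetch_credit_data", "input": "borrower-123"}
        return {"final": "application flagged for human review"}

    TOOLS = {  # hypothetical tools the agent may select autonomously
        "fetch_credit_data": lambda borrower_id: {"borrower": borrower_id, "score": 612},
    }

    def run_agent(task, llm=stub_llm, max_steps=10):
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):            # hard step budget: a basic governance control
            decision = llm(history)
            if "final" in decision:           # the model decides it has finished
                return decision["final"]
            result = TOOLS[decision["action"]](decision["input"])  # model-chosen tool runs
            history.append({"role": "tool", "content": str(result)})
        raise RuntimeError("step budget exhausted; escalate to a human reviewer")

    print(run_agent("assess creditworthiness of borrower-123"))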

Currently, AI agents are principally used in the financial services sector for risk assessment, fraud detection, anti-money laundering measures and credit scoring. They are less prominent in decision-making at management level.

This may be because AI agents currently lack the ability to form nuanced, interconnected judgements about how causes and effects interact in real-world scenarios.

In summary, challenges associated with using AI agents include:

  • Understanding how the agent works – a lack of explainability (the 'black box' problem) limits both the firm's and the user's ability to understand how an output was generated.
  • Model bias resulting from incomplete or skewed training data – a simple quantitative screen for this is sketched after this list.
  • Reliance on third party providers which, unlike firms, may be outside of the regulatory perimeter and in another jurisdiction.1
  • Becoming over-reliant on the AI agent, which can result in operational vulnerabilities if governance controls are insufficient.
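
On the bias point specifically, a simple quantitative screen can make the risk tangible. The Python sketch below computes a 'disparate impact' ratio, comparing approval rates between two groups; the data, group labels and the 0.8 trigger are illustrative assumptions, not a regulatory standard.

    # Illustrative sketch of a disparate impact check on model decisions.
    # Groups, data and the 0.8 threshold are assumptions for illustration only.

    def approval_rate(decisions, group):
        in_group = [d for d in decisions if d["group"] == group]
        return sum(d["approved"] for d in in_group) / len(in_group)

    def disparate_impact(decisions, protected, reference):
        # Ratio of approval rates; values well below 1.0 suggest the model
        # treats the protected group less favourably and warrants review.
        return approval_rate(decisions, protected) / approval_rate(decisions, reference)

    decisions = [
        {"group": "A", "approved": True}, {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]

    ratio = disparate_impact(decisions, protected="B", reference="A")
    if ratio < 0.8:  # the 'four-fifths' rule of thumb, used here only as an example trigger
        print(f"ratio {ratio:.2f}: flag model for human review")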

Regulators tend to highlight how their existing regulatory requirements address risks in an AI context:

  • In Germany, for example, the Federal Financial Supervisory Authority (BaFin) expects decisions, whether informed by AI or not, to be capable of explanation. Black box models are not, therefore, acceptable for meeting supervisory requirements; institutions must be able to provide insight into how their models function and why a given decision was generated. Further, firms must critically examine and verify AI-generated information through other sources, and document that process to ensure that the AI is producing responses that are factually accurate and grounded.
  • The Monetary Authority of Singapore (MAS) stresses the need for transparency (for example, through disclosures made and explanations given to customers about the use of AI that affects them) and explainability (facilitating understanding of AI-generated outputs or decisions, with a focus on the information and explanation needed by the users/customers which the particular AI use case is designed to serve). Where that AI use case could pose significant risk to the financial institution, the MAS recommends heightened management oversight, for example, through the establishment of a dedicated cross-functional committee.
  • The UK's Senior Managers and Certification Regime (SMCR) applies a technology-agnostic individual accountability framework to 'senior managers' of firms. The Prudential Regulation Authority (PRA) confirms that material uses of AI in a business activity, area or function should be recorded as falling within the scope of responsibilities of the relevant senior manager. In turn, that individual is expected to have sufficient understanding of the AI models being used and their data inputs so they can effectively evaluate risks.
  • The guidance on the use of generative AI language models issued by the Hong Kong Securities and Futures Commission (SFC) also stresses (in its first core principle) the importance of senior management oversight and responsibility, and the need for the governance framework to encompass the identification of high-risk use cases by taking into consideration any potential adverse client impact, particularly if the AI language model's output is inaccurate or inappropriate.
The risks from applying today's (and tomorrow's) technologies to financial services may not be novel, but the potential amplification of those risks should not be underestimated.

Timo Bühler
Partner, Germany

A common thread is that AI agents cannot be treated as autonomous decision-makers at management level. Senior leaders must continue to rely on professional judgment, understanding and effective human oversight when interpreting and acting on AI-generated outputs. They should take reasonable steps to ensure that the use of AI in supporting senior leader and/or board level decisions produces reliable results. Appropriate due diligence could include:

  • Implementing policies and procedures (including recordkeeping) around how the business approaches identifying where to deploy AI agents and making the decision to use AI in processes.
  • Conducting robust due diligence on third party service providers and contracting appropriately – see, for example, the requirements in Regulatory Technical Standards (RTS) issued under the EU Digital Operational Resilience Act (DORA), those in Prudential Standard CPS 230 issued by the Australian Prudential Regulation Authority (APRA) or those in the Supervisory Statement (SS) on outsourcing and third party risk management issued by the UK PRA.
  • Ensuring that the governance arrangements for AI deployments within the business include individuals with relevant skill sets.
  • Conducting post-deployment or post-implementation reviews which examine whether the AI system is delivering expected, compliant outcomes (a minimal monitoring sketch follows this list).
  • As with any system, once its use becomes business-as-usual, it should be subject to proportionate, appropriate monitoring and review, eg, by appropriately skilled compliance, risk and/or audit professionals, with adverse findings escalated in accordance with robust internal governance and risk management processes.
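
As a rough illustration of the post-deployment review point above, the following Python sketch compares observed model metrics against bands agreed at deployment and surfaces adverse findings for escalation. The metrics, bands and figures are assumptions for illustration; a firm would set its own through its risk appetite and governance framework.

    # Illustrative post-deployment review check. Metrics, bands and the sample
    # figures are assumptions; real ones come from the firm's own governance.

    EXPECTED = {
        "approval_rate": (0.40, 0.60),   # acceptable band agreed at deployment
        "override_rate": (0.00, 0.10),   # how often humans overrule the model
        "complaint_rate": (0.00, 0.02),
    }

    def review(observed, expected=EXPECTED):
        findings = []
        for metric, (low, high) in expected.items():
            value = observed.get(metric)
            if value is None or not (low <= value <= high):
                findings.append(f"{metric}={value} outside agreed band [{low}, {high}]")
        return findings

    # eg, monthly figures pulled from management information reporting
    for finding in review({"approval_rate": 0.71, "override_rate": 0.04, "complaint_rate": 0.01}):
        print("ESCALATE:", finding)   # adverse findings routed into internal governance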

Cloud-based AI

As the Reserve Bank of Australia has observed, financial institutions are increasingly using public cloud infrastructure. The Hong Kong Monetary Authority (HKMA) noted in January 2026 that cloud-related projects account for around 80% of reportable technology outsourcing initiatives by banks in Hong Kong, with roughly one-third to one-half involving critical banking systems.2 Broadly speaking, combining cloud infrastructure with AI offers cost and resource efficiencies and scalability which are attractive to firms operating in ever more competitive environments. AI and cloud computing may look like a match made in (cyber) heaven but, taken together, the composite risk profile may be far greater than that of either technology taken individually. The fact that the data involved may be sensitive and/or stored on, or backed up to, servers in another jurisdiction adds to the risk.

Public cloud platforms allow for the storage and management of data in external servers operated by third parties over the public internet.

The computing power and enhanced security offered by cloud technology is opening new frontiers in cloud-based GenAI software (Cloud AI). Cloud AI can facilitate technological efficiency and innovation while reducing in-house IT costs.

Reserve Bank of Australia, Financial Stability Review, April 2025

Notably, in Australia, the uptake of Cloud AI has been a particular area of focus for regulators, in recognition of this risk profile. The fundamental risk management and due diligence questions, as articulated by APRA Executive Director of Cross-industry Risk, Chris Gower, remain broadly the same and include:

  • Are appropriate cybersecurity controls in place – is there good cyber hygiene?
  • Has the entity considered AI risk introduced by third parties?
  • Who has access to critical information and systems?

While using the public cloud offers potential benefits including scalability, reliance on third-party providers of cloud platforms to manage critical data poses a range of risks associated with the reduced transparency for both firms and their clients. There may be limited oversight of the third party's internal operations, security, and system weaknesses.

In Singapore, the MAS recognises that regulated firms may rely on third-party AI but stresses the need for firms to comply with existing regulatory expectations on managing the risks from outsourcing and use of third party services, and highlights key areas for firms to consider such as transparency, fairness, supply chain assessments, concentration risks and complexity.

The risks associated with Cloud AI continue to attract the attention of regulators. In November 2025, the Australian Securities and Investments Commission (ASIC) published its licensing and registration update report, which stressed the potential harms associated with the use of AI by financial service providers – its impact on decision-making, including bias and discrimination; provision of false information; exploitation of behavioural biases and consumer vulnerabilities; and erosion of consumer protection and trust. Following the release of ASIC's detailed report analysing 624 AI use cases in banking, credit, insurance and financial advice, both ASIC and APRA continue to urge firms to ensure their governance keeps pace with the accelerating adoption of AI – notably by filling gaps in arrangements for managing AI-specific risks such as data quality, establishing contestability arrangements specific to AI, and identifying potential consumer harm. As indicated in its Corporate Plan, in 2026 APRA will assess the appropriateness of the risk management and oversight practices of a group of larger firms to support reasonable adoption of AI across the financial system.

Concerns about the centralisation of sensitive data are also front of mind for governments, regulators and financial services firms, not least following news of a breach impacting the U.S. Office of the Comptroller of the Currency (OCC), in which hackers accessed supervisory systems, leaving confidential bank information exposed. In Australia, APRA will continue to assess entities' compliance with its new prudential standard for operational risk management, which covers risks associated with AI and cybersecurity as well as business continuity risks more broadly. The UK government's Guidance on multi-region cloud and software-as-a-service will also inform the approach of the UK regulators.

The legislative landscape is also shifting in ways that will impact firms' decision-making. In Australia, for example, the amended Privacy Act 1988 (Cth) – which mandates the 'open and transparent management of personal information' – imposes new obligations on regulated Australian entities to disclose how personal information is used in automated decision-making that could reasonably be expected to significantly affect the rights or interests of an individual.

The implementation of Cloud AI requires firms, wherever they are based, to make informed, carefully weighed decisions, balancing innovation and efficiency with consumer protection and regulatory compliance.

Quantum computing

Quantum computing – once realised – will deliver exponentially faster computing and increased capacity for complexity.

What is quantum computing?

While conventional computers perform calculations by encoding information as digital bits (ie, '0s and 1s'), quantum computers work internally by using quantum bits (qubits). Unlike digital bits, qubits can hold two states at the same time – ie, be in a superposition of 0 and 1 simultaneously, as set out below. For some (but not all) important tasks, this would allow future quantum computers to find a solution to a problem using far fewer steps and calculations than a conventional computer. It could potentially also be used to tackle previously unsolvable problems. The UK Digital Regulators Cooperation Forum (DRCF) describes quantum computing as a 'promising and diverse technology'. However, there are a number of challenges – both technical and engineering – to resolve before the potential of quantum computing (and other quantum technologies) can be fully realised.
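
In the standard notation of quantum mechanics, that superposition is written as follows: the qubit's state is a weighted combination of the two basis states, and the weights fix the probabilities of measuring 0 or 1.

    % A qubit's state as a superposition of the basis states:
    % measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
    \[
      \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
      \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
    \]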

For more on quantum computing, see our article 'Teetering on the brink of quantum utility – Not if, but when ...'

As quantum utility gets closer, policymakers are beginning to turn their minds seriously to this technology. For example, the UK government, in its National Quantum Strategy said that it aims to position the UK, over the next decade, as a 'quantum superpower'. A UK Financial Conduct Authority (FCA) October 2025 research paper identified the use of quantum computing within financial services as a UK growth opportunity, describing its potential to 'transform critical operations, create competitive advantage, and help position the UK as a world-leading hub for financial innovation'.

For decision-makers in financial services firms, this vastly increased speed in processing complex data presents both opportunities and risks. Last year's FSR Outlook article focused on the risks that quantum utility poses for existing cryptographic solutions and the need for financial services firms to focus on the work necessary for migration to quantum-safe cryptography as an immediate priority. The G7 Expert Group, co-chaired by the Bank of England and the U.S. Treasury, released a roadmap for transitioning to post-quantum cryptography in the financial sector in January of this year, underscoring the importance of this risk for policymakers and regulators.
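
One early, practical migration step is an inventory of where quantum-vulnerable cryptography is in use. The Python sketch below illustrates the idea using the widely available 'cryptography' package; the certificate directory is an assumed input, and a real inventory would extend well beyond certificates to keys, protocols and vendor dependencies.

    # Illustrative sketch of a cryptographic inventory step for post-quantum
    # migration planning: scan a directory of PEM certificates (the path is an
    # assumed input) and flag public-key algorithms that Shor's algorithm breaks.
    from pathlib import Path

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import dsa, ec, rsa

    QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey, dsa.DSAPublicKey)

    def scan_certificates(cert_dir):
        for pem in Path(cert_dir).glob("*.pem"):
            cert = x509.load_pem_x509_certificate(pem.read_bytes())
            key = cert.public_key()
            if isinstance(key, QUANTUM_VULNERABLE):
                yield pem.name, type(key).__name__

    for name, algorithm in scan_certificates("./certs"):
        print(f"{name}: {algorithm} is quantum-vulnerable; plan its replacement")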

As with other new technologies, decision-makers in firms must be mindful that quantum computing may amplify existing dynamics (for example, data governance and security, design of financial models, market conduct, and consumer disputes) in novel ways. Explainability is likely to prove a particular challenge in respect of quantum use cases.

Footnotes

1. See, among others, BaFin – Digitalisierung.

2. HKMA circular "Practice Guide on Cloud Adoption", 8 January 2026 – the practice guide provides enhanced guidance and shares good industry practices to assist banks in adopting cloud technology responsibly.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

