5 May 2025

FCA Issues Update On AI Sprint

Lewis Silkin

In January 2025, the FCA hosted a two-day AI Sprint to discuss the opportunities and challenges of AI in financial services. Participants from across industry, academia, regulators, technology providers and consumer representatives discussed how AI may develop in financial services over the next five years, and the FCA's role in enabling firms to embrace the benefits of AI while also managing the risks. The FCA has now published a summary of the event. Four common themes emerged from participants' discussions and suggestions:

Regulatory clarity

Participants highlighted the importance of firms understanding how existing regulatory frameworks apply to AI. They suggested areas where the FCA could clarify, or build on, existing requirements to help firms understand regulatory expectations and to support beneficial innovation.

Trust and risk awareness

Participants saw trust in AI as vital for its successful adoption. If firms and consumers felt able to trust AI, they would be more likely to use it, with firms buying into new AI use cases and consumers engaging with new offerings. Participants agreed that, without trust, the full benefits of AI in financial services would not be realised.

Collaboration and coordination

Participants emphasised that all parties involved in AI needed to work together to develop solutions. This includes domestic and international regulators, government, financial services firms, academics, model developers and end users.

Safe AI innovation through sandboxing

Participants appreciated the need for a safe testing environment to encourage responsible innovation. Suggestions included using the FCA's sandboxes and innovation services to create a safe space, as well as providing access to datasets for innovators to develop and improve AI solutions.

Participants explored the next five years of AI in financial services as well as the current financial services regulatory regime. They considered increasing personalisation for consumers and increasing automation for firms, along with the conditions needed for safe and responsible AI adoption. Factors that contribute to safe adoption include measurable success criteria; solid model, data, cloud and technology foundations; staff upskilling and internal governance; and common standards and interoperability.

When it comes to the current regulatory framework, three themes were discussed: robust processes within firms, good outcomes for customers and effective competitive markets.

Next steps

The FCA will focus on the following key issues raised by participants:

The need for a safe space to innovate

The FCA is launching the Supercharged Sandbox, offering innovators greater computing power, infrastructure, datasets, and mentorship to support the testing and validation of AI solutions. The programme is being shaped by insights from the Sprint and identified focus areas to address key adoption challenges. The goal is to equip innovators with the necessary tools, expertise, and regulatory engagement to drive AI adoption in financial services.

Areas of uncertainty

The FCA is considering whether there are areas of uncertainty where regulation could be restricting safe and responsible AI adoption. For example, one identified area of uncertainty was around data protection and privacy.

International engagement

The FCA will take steps to make sure that it effectively influences the work of international standard-setting bodies on AI, to support safe and responsible adoption in the UK.

Collaboration opportunities

The FCA will be engaging bilaterally with other regulators to explore cross-cutting considerations as well as collaboration opportunities on specific themes.

Communication and continued engagement

Finally, the FCA will provide clear information about its approach to AI and will continue to engage with stakeholders via its AI Lab.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
