ARTICLE
28 August 2025

Insights From The BMA's Discussion Paper On Responsible Use Of Artificial Intelligence In Bermuda's Financial Sector

Appleby

Contributor

Appleby is one of the world’s leading offshore law firms, operating in 10 highly regarded and well-regulated locations. We provide comprehensive, expert advice and services across a number of key practice areas. We work with our clients to achieve practical solutions whether from a single location or across multiple jurisdictions.

The Bermuda Monetary Authority (BMA) recently published a discussion paper, The Responsible Use of Artificial Intelligence in Bermuda's Financial Sector (Discussion Paper), on 30 July 2025. The Discussion Paper provides an overview of Artificial Intelligence (AI) applications, an assessment of global regulatory approaches, the sector-specific risks and opportunities presented by AI, and an outline of potential pathways for developing an appropriate regulatory framework.

The BMA is taking a consultative approach to ensure that the proposed regulatory framework is fit for purpose while adhering to international standards. The deadline for feedback is 30 September 2025.

As the global financial services industry integrates AI into its business functions, the BMA recognizes the need for effective regulation that is both practical and achievable, creating a regulatory environment that harnesses the potential of AI rather than stifling technological advancement.

Financial institutions operating in Bermuda that leverage AI solutions, including insurers, will need to address AI within their existing risk management frameworks, including governance and oversight.

Key Takeaways

  • Accelerated Growth of AI in Financial Services – AI is already being leveraged within core business functions across the financial industry.
  • Risks of AI – AI poses risks including biased outcomes, cyber security threats, privacy violations and the destabilization of critical market functions.
  • Potential Pathways for BMA Regulatory Framework – The BMA aims to design a technology-neutral and outcomes-focused regulatory approach and proposes a comprehensive yet practical risk management framework emphasizing board accountability, proportionate risk management and integration within existing regulatory structures.

Uses and Risks of AI

AI can be utilized in many areas of application, including but not limited to: automation of complex processes, document preparation, optimization of portfolio allocations through AI algorithms, claims management, compliance monitoring, catastrophe modelling and underwriting. A 2022 BMA survey showed that, in the insurance sector, cybersecurity topped the list of current AI applications.

The Discussion Paper also identifies several risks and challenges that financial institutions must consider and manage when adopting AI technology, both to minimize those risks and to ensure that they remain accountable for the safe and fair use of AI.

Overview of Potential Regulatory Framework

Based on the BMA's analysis, as further set out in the Discussion Paper, the BMA proposes an outcomes-based risk management framework which emphasizes governance and oversight, with ultimate accountability remaining with the board. The proposed framework addresses AI identification and inventory management, alongside risk assessment across the following key areas:

  • Impact Severity: The potential consequences of system failure or malfunction on customers, business operations, and the broader financial system, with externally deployed systems generally presenting higher risk profiles than internal systems.
  • Autonomy and Human Oversight: The degree of human involvement in decision-making and intervention capability, particularly for critical operations, important business services, and regarding interfaces between AI systems and external stakeholders, including the risk that operators acting only as a 'fail-safe' become over-reliant on model outputs, reducing vigilance (known as 'automation bias').
  • Complexity and Explainability: The transparency, interpretability and explainability of the AI model and its decision-making processes.
  • Data Sensitivity: The nature and sensitivity of personal and financial information being processed by the system.
  • Deployment Context and Scale: The operational environment (internal staff support versus direct customer or market interface) and the scope of deployment, including the number of customers, transactions, or business processes potentially affected.

The Discussion Paper also outlines governance and risk considerations regarding: robust data management, model development and validation, human oversight requirements, explainability and fairness considerations, ongoing monitoring and change management, third-party risk management, generative AI-specific controls, and cybersecurity and operational resilience measures.

The implementation of the AI governance framework should reflect the proportionality principle, considering institutional scale, complexity, risk appetite, AI maturity and customer relationship types, which is consistent with the BMA's general approach to supervising the financial sector.

Conclusion

The proposed framework focuses on board accountability, proportionate risk management, and integration within existing regulatory structures, as opposed to creating separate AI-specific obligations. This approach aims to reduce compliance complexity while still addressing the risks posed by the adoption of AI systems.

The BMA recognizes Bermuda's diverse financial marketplace and the varying capabilities of financial institutions; accordingly, the framework applies the proportionality principle to scale governance requirements based on a financial institution's business profile, size, complexity, AI maturity and the nature of its customer interactions.

Next Steps

The Discussion Paper seeks feedback from key industry stakeholders to directly inform the development of a more concrete proposal for AI policy in Bermuda, one that balances technological innovation with appropriate safeguards.

The BMA will analyze stakeholder feedback and issue further consultation papers on this topic in Q1 2026, with a view to issuing a final proposal in Q3 2026. The Discussion Paper can be found here.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
