3 November 2025

AI Adoption Without Safeguards: A Growing Risk For Insurers

Browne Jacobson


We are seeing an uptick in the use of artificial intelligence (AI) tools in business; companies and organisations are adopting routine use of AI bots and increasing integration of AI into standard practices.

This step into the future, while exciting, comes with risks. A new Moody's survey has found that nearly a quarter of the businesses surveyed have no rules in place to govern the safe use of AI tools.

The survey asked almost 2,000 organisations how they safeguard AI in the workplace. It found that 22% of them have no policies in place, leaving them "vulnerable to data breaches and loss of competitive advantage".

Data breach, supply chain and cybersecurity risks

Public AI tools such as OpenAI's ChatGPT or Google's Gemini often process data on external servers. If companies submit proprietary information to such tools, they risk data and confidentiality breaches, exposure of sensitive information and even reputational harm.

These third-party software providers often sit within a complex network of vendors and suppliers, so a vulnerability in any one member's defences can have serious consequences that cascade through the entire supply chain.

Moody's research also showed that many of the organisations they rate "are falling victim to cyberattacks, primarily owing to indirect incidents via third-party suppliers, partners or service providers".

Despite the dangers, Moody's survey revealed that 14% of organisations have never reviewed their vendors' cybersecurity practices, and that defences against ransomware are "patchy": only 78% of organisations scan their backup data for ransomware or other malware.

What this means for insurers

In the current climate, where cyberattacks are rife and the use of AI tools is on the rise, it is imperative that internal policies are in place to mitigate such risks.

Insurers should take care to ensure that AI risk has been considered appropriately in cyber cover, as well as in other lines likely to be exposed, such as PI and MLP. Insurers may also want to review their pre-inception questionnaires and underwriting criteria to take account of the practices that insureds have in place (or not, as the case may be!).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
