As organizations rapidly adopt generative and agentic AI across their operations, an important counter-movement is emerging: insurers are apparently seeking to limit or exclude coverage for AI-related risks. This shift would mark a pivotal moment for corporate risk management, and one that many companies may not yet have fully appreciated.
As reported by the Financial Times (behind paywall), several major insurers have begun filing for regulatory approval in the United States to introduce broad exclusions tied to the use of AI tools. Some of these proposed exclusions would apply to any use — or even alleged use — of AI, or to any product or service that incorporates AI in any way. For businesses deploying modern software, this could translate into a significant reduction in available protection.
Behind this trend lies a deeper concern. AI models remain difficult to audit, unpredictable in their behavior, and often opaque even to their creators. When something goes wrong, the chain of responsibility can span developers, model providers, integrators, and end users, making liability hard to pinpoint. Insurers also worry about a scenario that is unusual in most other lines of coverage: the possibility of a single model failure triggering losses across thousands of policyholders simultaneously. Traditional actuarial models are not designed for this kind of correlated, systemic risk.
Recent incidents involving AI-generated misinformation, automated customer-service tools making inaccurate statements, or sophisticated impersonation frauds have only heightened these concerns. While each incident has been manageable on its own, together they illustrate how unpredictable AI behavior can be — and how difficult it is for insurers to assess where future claims might arise.
To navigate this uncertainty, some insurers have begun introducing AI-specific endorsements. These sometimes clarify how existing policies apply to emerging AI regulations or offer narrow protections for limited scenarios. However, they also frequently impose new caps or carve-outs that reduce the scope of coverage. The overall direction is unmistakable: more restrictive language and a shrinking pool of insurable risk.
For businesses integrating AI into their products or operations, the implications are significant. Risk management cannot rely solely on transferring exposure to an insurer. Instead, companies must revisit contractual allocations of responsibility with vendors and partners, strengthen internal governance of AI systems, and ensure clear oversight of how AI tools are deployed. Insurers themselves increasingly evaluate these governance structures when determining whether — and to what extent — a business remains insurable.
Ultimately, the retreat of insurers underscores a broader reality: AI risk has become a strategic concern. As systems grow more autonomous and pervasive, organizations must develop a more mature, holistic approach to managing the associated uncertainties. Insurance will continue to play a role, but it can no longer be assumed to provide a comprehensive backstop.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.