ARTICLE
13 June 2025

Underwriting, Claims, Liability: Building AI Into Your Insurance Policies

Miller Thomson LLP

Recent technological advancements in artificial intelligence (AI) are transforming insurance industry practices for both insurers and policyholders. AI can unlock greater efficiency in underwriting, pricing, and claims handling. Yet it also raises legal questions about how insurance policies are managed and how transparency obligations to policyholders are upheld.

Underwriting and claims management

AI-powered risk underwriting is a top use case for insurers. Insurers harness algorithms to analyze large volumes of data, refining each policyholder's risk profile and adjusting premiums accordingly. This approach is effective but not foolproof: flawed or incomplete analysis can leave policyholders with coverage that falls short of their actual needs.

AI also facilitates the development of dynamic insurance products, in which terms (e.g., premiums) adjust on a sliding scale tied to the insured's behaviour. Auto insurance is a prime example, with certain premium adjustments derived from driving data gathered by mobile apps.

In claims management, AI is used to flag discrepancies, categorize claims by risk level, and provide claims adjusters with data-driven insights. This automation helps reduce costs and speed up claims processing. However, faulty analysis can result in an unfair denial of coverage, with serious consequences for the insured.

The use of AI raises certain concerns with regard to insurance coverage. In response to these risks, some insurers are updating their policies to include AI-related endorsements.

AI-related endorsements: A gradual insurance policy trend

Some insurers have begun to manage these risks more proactively by incorporating endorsements into their policies. These clauses are intended to clarify the scope of coverage applicable to incidents arising from the use of automated systems or AI-powered tools, including any resulting errors, intellectual property infringements or technical failures. Some companies stand to benefit from this type of endorsement, as illustrated by a recent B.C. Civil Resolution Tribunal decision that held a company liable for misleading a consumer with inaccurate information provided by its chatbot.

AI-related endorsements aim to address the grey areas of algorithmic risk, protecting the insured while mitigating the insurer's exposure to unforeseen claims. In Quebec, such clauses must be carefully drafted to uphold the principles of clarity, predictability, and good faith. As the Supreme Court of Canada held in Progressive Homes and reaffirmed in Ledcor, ambiguous wording may be construed against the insurer, particularly where language limiting coverage is unclear.

These endorsements embody a natural evolution in insurance law to embrace the digital age. Yet, a sophisticated understanding of their technical and legal considerations is required. Beyond contractual mechanisms like endorsements, introducing guardrails for AI also involves its own set of legal transparency requirements. These obligations are vital where decision-making is fully automated without human intervention.

Transparency, bias, and challenges: Quebec requirements

Since the Act to modernize legislative provisions as regards the protection of personal information came into effect, organizations that make decisions based solely on automated processing must inform the individuals concerned. These organizations may also be required to provide an explanation of the factors backing their decision and offer the individual the opportunity to submit observations to an employee.

In insurance, these obligations become critical, particularly where a denial of coverage, policy termination or reduction in indemnity is involved. Any shortcoming could be construed as an infringement of the insured's rights or an unfair practice.

Addressing algorithmic bias remains a top priority. Poorly designed AI tools can turn out discriminatory decisions, particularly if the data set contains historical biases. In Quebec, such biases may contravene the Charter of human rights and freedoms or attract civil liability.

Bottom line

AI is transforming the way insurers underwrite risk, draft insurance policies and handle claims. Adopting AI-powered systems and tools drives innovation but requires heightened legal oversight. Managing risk proactively with transparent governance helps avoid contentious claims, bolster policyholder trust and ensure sustainable compliance in an ever-changing landscape.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
