The use of artificial intelligence ("AI") algorithms is increasing across insurance markets but poses real and specific liability risks for companies attempting to streamline decision making. For example, health insurance companies are under significant scrutiny for their use of algorithms in determining coverage. An algorithm is essentially a set of instructions that tells a computer how to operate on its own, making algorithms especially useful in industries that must process voluminous amounts of data and render decisions. Though AI is touted for increasing productivity, it is also susceptible to bias and may generate more risks than benefits.
For insurers, AI can be helpful in evaluating risks (e.g., predicting the likelihood of future claims), setting policy pricing, and detecting fraud in insurance applications. However, major insurance companies are facing lawsuits arising from the use of AI algorithmic tools in underwriting and claims processing. A class action lawsuit against Cigna Health and Life Insurance ("Cigna") is ongoing in the United States District Court for the Eastern District of California. Kisting-Leung v. Cigna Corp., No. 2:23-CV-01477-DAD-CSK, 2025 WL 958389 (E.D. Cal. Mar. 31, 2025). There, the plaintiffs allege Cigna utilizes an algorithm-based tool called PxDx that reviews health insurance claims and compares procedure codes with Cigna's list of approved diagnosis codes for each procedure. Id. at *1. The plaintiffs contend that the use of this algorithm resulted in the denial of claims for medically necessary procedures without actual review by a medical director or physician, as required by California law. Id. at *2.
Many states have adopted legislation and policies to limit and govern the use of AI by insurance companies. In Oklahoma, the state's Insurance Department issued Bulletin 2024-11, which provides guidelines for insurers utilizing AI systems to ensure compliance with all applicable insurance laws, regulations, and fair trade practices. As the use of AI algorithms becomes more common, and as the risks of discrimination and unjust results become evident through litigation, more states will likely adopt comprehensive legislation governing insurers' use of such tools to ensure fair claims handling. Insurance companies are not alone in using AI to streamline decisions; financial lenders and hiring employers should also closely monitor their use of algorithmic tools to avoid biased and improper outcomes that could create significant litigation risk.
Originally Published by The Journal Record
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.