ARTICLE
13 October 2025

When Algorithms Underwrite: Insurance Regulators Demanding Explainable AI Systems

Harry A. Valetk, Buchanan Ingersoll & Rooney PC

Insurers are increasingly using artificial intelligence (AI) systems across operations, including underwriting policies, pricing, claims processing, fraud detection and customer service. But with the broad use and deployment of automated decision-making technology comes heightened scrutiny. Regulators, consumer advocates and courts are sharpening their focus on fairness, transparency and accountability in AI-driven insurance decisions.

NAIC Model AI Bulletin

In 2023, the National Association of Insurance Commissioners (NAIC) adopted its Model Bulletin: Use of Artificial Intelligence Systems by Insurers (Model AI Bulletin), providing the clearest roadmap yet for how state regulators expect insurers to adopt, govern and audit AI systems. The Model AI Bulletin, however, sits alongside accelerating state-level mandates (e.g., New York, Colorado, California) and mounting litigation tied to opaque algorithmic decisions.

For insurers, the message is stark: every AI-enabled decision, especially an adverse one, will be examined for bias, interpretability and procedural fairness. Automation without accountability can lead to harm, and regulators expect human oversight, transparency and procedural fairness in AI deployments. The compliance bar is rising. Below is a summary of regulatory expectations and enforcement priorities under state insurance department rules, along with concrete steps to align your AI practices with regulatory and legal expectations.

According to the Model AI Bulletin, insurers using AI should include the following as part of their AI governance:

Documented Governance: Covering the development, acquisition, deployment and monitoring of AI tools, including those sourced from third-party vendors.

Transparency and Explainability: Ability to explain how AI systems function, including how inputs lead to specific outputs or decisions.

Consumer Notice: Disclose when AI systems are in use and provide appropriate levels of information based on the phase of the insurance life cycle in which the AI systems are deployed.

Fairness and Nondiscrimination: Evaluate AI systems for potential bias and unfair discrimination in regulated processes such as claims, underwriting and pricing – and proactively address them.

Risk-Based Oversight: Oversee AI systems used in high-stakes decisions (e.g., coverage denials or rate setting) and implement more robust documentation, controls and testing than tools used for back-end operations or consumer outreach.

Internal Controls and Auditability: Use independent audits, validation and regular reviews of AI model performance to demonstrate compliance and accuracy over time.

Third-Party Vendor Management: Oversee third-party-owned systems, since insurers remain ultimately responsible for the AI systems used throughout their operations; this includes demonstrating due diligence and contractual safeguards for AI services.

State AI Rules for Insurers

While the Model AI Bulletin sets out a nationwide framework for the responsible use of AI, several states have moved ahead with their own measures. Among the most notable are California, Colorado and New York, each of which has adopted a distinct approach to regulating how insurers may deploy algorithmic tools in decision-making.

New York's DFS Insurance Circular Letter No. 7 (2024) requires insurers to demonstrate that AI and external data systems do not serve as proxies for protected classes or generate disproportionate adverse effects. Insurers must keep explanatory documentation, allow the Department of Financial Services to review vendor tools, require vendor audits and ensure internal oversight. The Circular Letter bridges technical oversight with legal exposure, demanding bias testing, internal logs for review and explainability for adverse outcomes.

Colorado Revised Statutes (C.R.S.) §10-3-1104.9 (and its implementing regulation) prohibits the use of external consumer data sources and predictive models that result in unfair discrimination. Although the statute applies to all insurers, the implementing regulation currently applies only to life insurers and requires quantitative testing to detect disparate impact, even where the data or model is facially neutral. However, effective October 15, 2025, C.R.S. §10-3-1104.9 will also cover private passenger automobile insurance and health benefit plans.

California Health & Safety Code §1367.01 and California Insurance Code §10123.135 restrict health care service plans and disability insurers from relying solely on automated tools in health care decisions; any adverse determination must be reviewed by a licensed clinician. These provisions also require disclosure when AI contributes to a decision and ensure that appeals processes remain accessible. Finally, they illustrate how state AI rules may limit or condition AI use in claims and medical settings, not just underwriting.

Recommendations for Insurers

Inventory of AI Systems and Risk Triage: Catalog every AI system in underwriting, pricing, claims, servicing, fraud and marketing. Rank each system by risk exposure (degree of decision impact, consumer harm potential, model opacity and reliance on external data).
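To make this concrete, the sketch below shows one way to score and tier an AI-system inventory in Python. The field names, weights and tiering threshold are illustrative assumptions, not a regulatory standard, and should be adapted to an insurer's own risk framework.

# Minimal sketch of an AI-system inventory with risk triage.
# All field names and scoring weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    function: str         # e.g., "underwriting", "claims", "marketing"
    decision_impact: int  # 1 (low) to 5 (directly determines consumer outcomes)
    consumer_harm: int    # 1 (low) to 5 (coverage denial, rate increases)
    model_opacity: int    # 1 (interpretable) to 5 (black box)
    external_data: int    # 1 (internal data only) to 5 (heavy third-party data)

    def risk_score(self) -> int:
        # Simple additive score; higher scores warrant more oversight.
        return (self.decision_impact + self.consumer_harm
                + self.model_opacity + self.external_data)

inventory = [
    AISystem("claims-triage-v2", "claims", 5, 5, 4, 2),
    AISystem("marketing-propensity", "marketing", 1, 1, 3, 3),
]

for system in sorted(inventory, key=lambda s: s.risk_score(), reverse=True):
    tier = "high" if system.risk_score() >= 14 else "standard"
    print(f"{system.name}: score={system.risk_score()} tier={tier}")

Systems in the "high" tier would then carry the heavier documentation, testing and human-review obligations described below.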

Design a Program That Is "Defensible by Documentation": Build (or enhance) an AI governance program designed for audit and challenge. Ensure each AI system has documentation covering its purpose, data sources, variable descriptions, performance metrics, drift controls, validation reports, versioning and change logs.

Validation and Bias Testing: For each model, run fairness assessments, sensitivity analysis, error rate audits, proxy tests and stress testing. Define action thresholds for remediation or disablement. Maintain full validation reports for internal and third-party reviews.
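As an illustration of the kind of quantitative screening regulators expect, the sketch below computes a simple adverse-impact (disparate-impact) ratio on model decisions. The column names and the 0.80 screening threshold are illustrative assumptions; neither the Model AI Bulletin nor the state rules prescribe a single test.

# Minimal sketch of an adverse-impact ratio check on model decisions.
# Column names and the 0.80 threshold are illustrative assumptions.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, favorable_col: str,
                         reference_group: str) -> pd.Series:
    """Favorable-outcome rate per group divided by the reference group's rate."""
    rates = df.groupby(group_col)[favorable_col].mean()
    return rates / rates[reference_group]

decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 0],
})

ratios = adverse_impact_ratio(decisions, "group", "approved", reference_group="A")
flagged = ratios[ratios < 0.80]  # common "four-fifths" screening threshold
print(ratios)
print("Flag for remediation review:", list(flagged.index))

A ratio falling below the defined action threshold would trigger the remediation or disablement steps set out in the validation plan, with the full results preserved for internal and third-party review.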

Vendor Management: Lawful sourcing is non-negotiable. Insist on access to model logic and on indemnities. Using unauthorized datasets exposes companies deploying AI systems to extraordinary liability, even if the downstream use is arguably transformative. Insurers using AI systems should therefore negotiate contractual assurances or warranties that the AI system developer has thoroughly reviewed its AI training inputs and eliminated any reliance on questionable datasets, such as gray-market repositories.

Explainability Infrastructure: Deploy reasoning modules, feature attribution methods (e.g., LIME), or surrogate models to support explanations. Embed human review for high-stakes decisions (e.g., coverage denial, claim rejection). Maintain trace logs that tie an output back through input features, model logic, thresholds and decision path.
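One illustrative approach, sketched below, pairs a shallow surrogate model (a human-readable approximation of the production model) with a per-decision trace log. The "black box" model, feature names and log fields are assumptions for demonstration only, not a prescribed architecture.

# Minimal sketch: global surrogate model plus a per-decision trace log.
# The stand-in model, feature names and log fields are illustrative assumptions.
import json
from datetime import datetime, timezone

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in "black box"
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
feature_names = ["prior_claims", "vehicle_age", "territory_factor"]

black_box = GradientBoostingClassifier().fit(X, y)

# Surrogate: a shallow, human-readable tree trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=feature_names))

# Trace log entry tying one output back to inputs, model version and threshold.
applicant = X[0]
log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model_version": "pricing-model-v3.2",
    "inputs": dict(zip(feature_names, applicant.round(3).tolist())),
    "score": float(black_box.predict_proba([applicant])[0, 1]),
    "decision_threshold": 0.5,
    "decision": "approve" if black_box.predict([applicant])[0] == 1 else "refer_to_human",
}
print(json.dumps(log_entry, indent=2))

The surrogate tree supports plain-language explanations of how inputs drive outputs, while the log entry gives examiners a reviewable record of each high-stakes decision.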

Regulatory Filings: Be proactive in states with AI/underwriting rules, and file or certify AI utilization as required. Prepare a "regulator-ready" package for each high-impact AI system: validation, bias reports, oversight documentation, vendor audits and explanatory logic. Anticipate that market conduct examinations will ask about your compliance with the NAIC Model AI Bulletin and other related requirements.

Governance and Compliance: Define clear roles; AI is not a mere technical concern. Establish board-level oversight and executive committees, and designate business owners, model risk managers and compliance officers. Ensure escalation paths exist for adverse outcomes or performance breaches. AI use must also remain consistent with the Unfair Trade Practices Act, the Unfair Claims Settlement Practices Act, Corporate Governance/Disclosure Acts, State Rating Laws and Market Conduct Authorities. In short, AI cannot be a shield from traditional fiduciary, anti-discrimination or rate adequacy obligations.

Insurance Department Bulletins Adopted

As of the date of this publication, the following states have adopted the NAIC Model AI Bulletin in full or in substantially similar form.

Alaska: Bulletin B 24-01 (February 1, 2024)

Arkansas: Bulletin 13-2024 (July 31, 2024)

California: Bulletin 2022-5 (June 30, 2022)

Colorado: 3 CCR 702-10 (November 13, 2023)

Connecticut: Bulletin No. MC-25 (February 26, 2024)

Delaware: Domestic and Foreign Bulletin No. 148 (February 5, 2025)

District of Columbia: Bulletin 24-IB-002-05/21 (May 21, 2024)

Illinois: Company Bulletin 2024-08 (March 13, 2024)

Iowa: Insurance Division Bulletin 24-04 (November 7, 2024)

Kentucky: Bulletin No. 2024-02 (April 16, 2024)

Maryland: Bulletin No. 24-11 (April 22, 2024)

Massachusetts: Bulletin No. 2024-10 (December 9, 2024)

Michigan: Bulletin 2024-20-INS (August 7, 2024)

Nebraska: Insurance Guidance Document No. IGD-H1 (June 11, 2024)

Nevada: Bulletin 24-001 (February 23, 2024)

New Hampshire: Bulletin Docket #INS 24-011-AB (February 20, 2024)

New Jersey: Insurance Bulletin No. 25-03 (February 11, 2025)

New York: Insurance Circular Letter No. 7 (July 11, 2024)

North Carolina: Bulletin No. 24-B-19 (December 18, 2024)

Oklahoma: Bulletin No. 2024-11 (November 14, 2024)

Pennsylvania: Insurance Notice 2024-04, 54 Pa.B. 1910 (April 6, 2024)

Rhode Island: Insurance Bulletin No. 2024-03 (March 15, 2024)

Texas: Bulletin # B-0036-20 (September 30, 2020)

Vermont: Insurance Bulletin No. 229 (March 12, 2024)

Virginia: Administrative Letter 2024-01 (July 22, 2024)

Washington: Technical Assistance Advisory 2024-02 (April 22, 2024)

West Virginia: Insurance Bulletin No. 24-06 (August 9, 2024)

Wisconsin: Insurance Bulletin (March 18, 2025)

As the AI landscape continues to evolve at a rapid pace, staying informed and compliant is more crucial than ever. At Buchanan, our Advanced Technology attorneys have deep industry experience and are following these developments closely.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
