Insurers are increasingly using artificial intelligence (AI) systems across operations, including underwriting policies, pricing, claims processing, fraud detection and customer service. But with the broad use and deployment of automated decision-making technology comes heightened scrutiny. Regulators, consumer advocates and courts are sharpening their focus on fairness, transparency and accountability in AI-driven insurance decisions.
NAIC Model AI Bulletin
In 2023, the National Association of Insurance Commissioners (NAIC) adopted the Model Bulletin: Use of Artificial Intelligence Systems by Insurers (Model AI Bulletin), providing the clearest roadmap yet for how state regulators expect insurers to adopt, govern and audit AI systems. This Model AI Bulletin, however, sits alongside accelerating state-level mandates (e.g., New York, Colorado, California) and mounting litigation tied to opaque algorithmic decisions.
For insurers, the message is stark: every AI-enabled decision, especially an adverse one, will be examined for bias, interpretability and procedural fairness. Automation without accountability can cause harm, and regulators expect human oversight, transparency and procedural fairness in AI deployments. The compliance bar is rising. Below is a summary of regulatory expectations and enforcement priorities under state insurance department rules, along with concrete steps to align your AI practices with regulatory and legal expectations.
According to the Model AI Bulletin, insurers using AI should include the following as part of their AI governance:
Documented Governance: Covering the development, acquisition, deployment and monitoring of AI tools, including those sourced from third-party vendors.
Transparency and Explainability: Ability to explain how AI systems function, including how inputs lead to specific outputs or decisions.
Consumer Notice: Disclose when AI systems are in use and provide appropriate levels of information based on the phase of the insurance life cycle in which the AI systems are deployed.
Fairness and Nondiscrimination: Evaluate AI systems for potential bias and unfair discrimination in regulated processes such as claims, underwriting and pricing – and proactively address them.
Risk-Based Oversight: Oversee AI systems used in high-stakes decisions (e.g., coverage denials or rate setting) and implement more robust documentation, controls and testing than tools used for back-end operations or consumer outreach.
Internal Controls and Auditability: Use independent audits, validation and regular reviews of AI model performance to demonstrate compliance and accuracy over time.
Third-Party Vendor Management: Manage third-party-owned systems; insurers remain ultimately responsible for AI systems used throughout their operations and should be able to demonstrate due diligence and contractual safeguards for AI services.
State AI Rules for Insurers
While the Model AI Bulletin sets out a nationwide framework for the responsible use of AI, several states have moved ahead with their own measures. Among the most notable are California, Colorado and New York, each of which has adopted a distinct approach to regulating how insurers may deploy algorithmic tools in decision-making.
New York's DFS Circular Letter 2024-7 requires insurers to demonstrate that AI and external data systems do not proxy for protected classes or generate disproportionate adverse effects. Insurers must keep explanatory documentation, allow the Department of Financial Services to review vendor tools, require vendor audits and ensure internal oversight. The Letter bridges technical oversight with legal exposure, demanding tests for bias, internal logs for review and explainability for adverse outcomes.
Colorado Revised Statutes (C.R.S.) §10-3-1104.9 (and its implementing regulation) prohibits use of external consumer data sources and predictive models that result in unfair discrimination. Even though this statute applies to all insurers, the implementing regulation currently only applies to life insurers, and requires performance of quantitative testing to detect disparate impact, even if the data or model is facially neutral. However, effective October 15, 2025, C.R.S. §10-3-1104.9 will also include private passenger automobile insurance and health benefit plans.
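The quantitative disparate-impact testing Colorado requires can take many forms; one common screen compares approval rates across groups against the most-favored group. The sketch below is a minimal illustration of that idea, not a regulator-endorsed methodology — the group labels, the toy outcome data and the use of the "four-fifths" (0.8) warning threshold are all assumptions for illustration.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """Approval-rate ratio of each group relative to the most-favored group.

    `decisions` is a list of (group_label, approved) pairs. A ratio below
    0.8 is the conventional "four-fifths" warning threshold used in
    disparate-impact screening (an assumption here, not a legal standard
    mandated by the statute).
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Toy example: hypothetical underwriting outcomes for two groups
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)
ratios = disparate_impact_ratio(sample)           # A: 1.0, B: 0.625
flagged = [g for g, r in ratios.items() if r < 0.8]
```

A facially neutral model can still fail a screen like this, which is precisely why the regulation requires testing outcomes rather than inspecting inputs alone.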
California Health & Safety Code §1367.01/California Insurance Code §10123.135 restricts health care service plans or disability insurers from relying solely on automated tools in health care decisions — any adverse determination must be reviewed by a licensed clinician. It also requires disclosure when AI contributes to a decision, and ensures appeals processes are accessible. Finally, this code illustrates how state AI rules may limit or condition AI use in claims/medical settings, not just underwriting.
Recommendations for Insurers
Inventory of AI Systems and Risk Triage: Catalog every AI system in underwriting, pricing, claims, servicing, fraud and marketing. Rank each system by risk exposure (degree of decision impact, consumer harm potential, model opacity and reliance on external data).
Design a "Defensible by Documentation" Program: Build (or enhance) an AI system program designed to withstand audit and challenge. Ensure each AI system has documentation covering purpose, data sources, variable descriptions, performance metrics, drift controls, validation reports, versioning and change logs.
Validation and Bias Testing: For each model, run fairness assessments, sensitivity analysis, error rate audits, proxy tests and stress testing. Define action thresholds for remediation or disablement. Maintain full validation reports for internal and third-party reviews.
Vendor Management: Lawful sourcing is non-negotiable. Insist on access to model logic and indemnities. The use of unauthorized datasets exposes companies deploying AI systems to extraordinary liability — even if the downstream use could be argued as transformative. Insurers using AI systems should, therefore, negotiate contractual assurances or warranties that the AI system developer has conducted thorough reviews of its AI training inputs, and eliminated any reliance on questionable datasets such as gray-market repositories.
Explainability Infrastructure: Deploy reasoning modules, feature attribution methods (e.g., LIME), or surrogate models to support explanations. Embed human review for high-stakes decisions (e.g., coverage denial, claim rejection). Maintain trace logs that tie an output back through input features, model logic, thresholds and decision path.
Regulatory Filings: Be proactive in states with AI/underwriting rules, and file or certify AI utilization. Prepare a "regulator-ready" package for each high-impact AI system: validation, bias reports, oversight documentation, vendor audits and explanatory logic. Anticipate that market conduct examinations will ask about your compliance with the NAIC's Model AI Bulletin and other related requirements.
Governance and Compliance: Define clear roles. AI is not a mere technical concern. Establish board-level oversight, executive committees, business owners, model risk managers and compliance officers. Ensure escalation paths for adverse outcomes or performance breaches. AI use must also be consistent with the Unfair Trade Practices Act, Unfair Claims Settlement Practices, Corporate Governance/Disclosure Acts, State Rating Laws and Market Conduct Authorities. In short, AI cannot be a shield from traditional fiduciary, anti-discrimination or rate adequacy obligations.
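The explainability infrastructure described above can start with simple, model-agnostic attribution. The sketch below illustrates permutation importance — shuffling one feature at a time and measuring how much the model's outputs move — against a stand-in scoring function. The feature names, weights and data are hypothetical; a production system would apply the same idea to the insurer's actual models and log the results for trace purposes.

```python
import random

def risk_score(features):
    # Stand-in for an opaque pricing/underwriting model (hypothetical weights).
    return (0.6 * features["claims_history"]
            + 0.3 * features["vehicle_age"]
            + 0.1 * features["region_code"])

def permutation_importance(model, rows, n_shuffles=20, seed=0):
    """Rank features by how much shuffling each one perturbs model outputs.

    A simple, model-agnostic attribution method: larger average output
    change when a feature is scrambled implies greater reliance on it.
    """
    rng = random.Random(seed)
    base = [model(r) for r in rows]
    importance = {}
    for feat in rows[0]:
        total = 0.0
        for _ in range(n_shuffles):
            values = [r[feat] for r in rows]
            rng.shuffle(values)
            shuffled = [dict(r, **{feat: v}) for r, v in zip(rows, values)]
            total += sum(abs(model(s) - b)
                         for s, b in zip(shuffled, base)) / len(rows)
        importance[feat] = total / n_shuffles
    # Sort features from most to least influential
    return dict(sorted(importance.items(), key=lambda kv: -kv[1]))

# Toy portfolio with uniformly random feature values
data_rng = random.Random(1)
rows = [{"claims_history": data_rng.random(),
         "vehicle_age": data_rng.random(),
         "region_code": data_rng.random()} for _ in range(50)]
importance = permutation_importance(risk_score, rows)
```

Attribution output of this kind, tied to the trace logs described above, is the sort of artifact a regulator reviewing an adverse decision would expect an insurer to be able to produce.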
Insurance Department Bulletins Adopted
As of the date of this publication, the following states have adopted the NAIC Model AI Bulletin in full or in substantially similar form.
State | Date Adopted
Alaska | February 1, 2024
Arkansas | July 31, 2024
California | June 30, 2022
Colorado | November 13, 2023
Connecticut | February 26, 2024
Delaware | February 5, 2025
District of Columbia | May 21, 2024
Illinois | March 13, 2024
Iowa | November 7, 2024
Kentucky | April 16, 2024
Maryland | April 22, 2024
Massachusetts | December 9, 2024
Michigan | August 7, 2024
Nebraska | June 11, 2024
Nevada | February 23, 2024
New Hampshire | February 20, 2024
New Jersey | February 11, 2025
New York | July 11, 2024
North Carolina | December 18, 2024
Oklahoma | November 14, 2024
Pennsylvania | April 6, 2024
Rhode Island | March 15, 2024
Texas | September 30, 2020
Vermont | March 12, 2024
Virginia | July 22, 2024
Washington | April 22, 2024
West Virginia | August 9, 2024
Wisconsin | March 18, 2025
As the AI landscape continues to evolve at a rapid pace, staying informed and compliant is more crucial than ever. At Buchanan, our Advanced Technology attorneys have deep industry experience and are following these developments closely.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.