ARTICLE
15 January 2026

The Insurance Implications Of AI In Your Business: Is Your Current Coverage Keeping Up?

Ward and Smith, P.A.

Contributor

Ward and Smith, P.A. is the successor to a practice founded in 1895. Our core values of client satisfaction, reliability, responsiveness, and teamwork are the standards that define who we are as a law firm. We are an established legal network with offices located in Asheville, Greenville, New Bern, Raleigh, and Wilmington.
By Angela P. Doughty

Artificial intelligence is no longer a future consideration for business owners. It is a present reality. Businesses across every industry in North Carolina and beyond are deploying AI tools for customer service, data analysis, content generation, hiring decisions, medical diagnostics, financial modeling, manufacturing processes, and much more. The efficiency gains are significant, and the competitive pressure to adopt AI is intense.

But AI also creates new risks, including risks that most business insurance programs were not designed to address. When an AI system makes a recommendation that harms a customer, produces output that infringes someone’s intellectual property, generates biased decisions that result in discrimination claims, or processes data in a way that violates privacy laws, the question becomes: does your existing insurance cover those scenarios?

For many businesses, the answer is uncertain at best and “no” at worst. This article explores the insurance implications of AI deployment, examines how existing coverage lines may or may not respond to AI-related claims, and provides practical guidance for businesses navigating this rapidly developing landscape.

The Risks AI Creates for Business Policyholders

AI-related liability can arise from many directions, and the range of potential claims is expanding as AI becomes more deeply embedded in business operations. The following is a sampling of contexts in which AI use, and the liability it creates, intersects with business insurance.

Errors and omissions. AI tools used in professional service delivery, such as legal research tools, medical diagnostic systems, financial advisory platforms, and engineering design software, can produce inaccurate or misleading output. If a business relies on AI-generated analysis and that analysis proves wrong, the resulting harm may give rise to professional negligence claims against the business.

Discrimination and bias. AI systems used in hiring, lending, pricing, or other decision-making processes can produce outcomes that disproportionately disadvantage protected groups, even when the bias was unintentional and the business was unaware of it.

Intellectual property infringement. AI systems trained on third-party data or content may generate output that infringes copyrights, trademarks, or trade secrets. A business that uses AI-generated marketing content, software code, or product designs faces the risk of infringement claims.

Privacy and data security. AI systems that process personal data (whether for customer analytics, behavioral targeting, or automated decision-making) raise data privacy concerns under an increasingly complex web of federal and state regulations. A breach or misuse of data by an AI system can trigger notification obligations, regulatory investigations, and civil claims.

Bodily injury and property damage. In manufacturing, transportation, healthcare, and other industries, AI systems that control physical processes or devices can cause bodily injury or property damage if they malfunction or make incorrect decisions. An autonomous system that directs a piece of equipment to operate unsafely, or a medical AI that recommends incorrect treatment, can produce devastating consequences.

Product liability. When AI is embedded in a product sold or distributed to end users, defects in the AI’s design, training data, or decision-making logic may support product liability claims. This is particularly relevant for consumer-facing products that incorporate AI-driven recommendations, safety features, or autonomous functionality.

Reputational harm. AI-generated content can contain defamatory statements, inaccurate claims about competitors, or misleading information about the business’s own products and services.

How Existing Coverage Lines May Respond

The challenge with AI-related claims is that they do not fit neatly into any single coverage category. Instead, they may implicate multiple policies or, worse, they may fall into the gaps between them, leaving the business exposed.

Commercial General Liability (CGL)

CGL policies cover third-party claims for bodily injury and property damage caused by an occurrence (an accident). They also cover personal and advertising injury, which includes certain enumerated offenses such as defamation and copyright infringement in advertising. An AI-related claim could potentially trigger CGL coverage if it involves bodily injury caused by an AI-controlled process, property damage resulting from an AI malfunction, or advertising injury from AI-generated content. However, CGL policies were not written with AI in mind, and insurers may contest whether AI-related injuries constitute “occurrences” or whether AI-generated content falls within the policy’s definition of “advertising.”

Professional Liability / E&O

Professional liability policies cover claims arising from errors or omissions in the performance of professional services. If a business uses AI in delivering professional services and the AI produces an error that harms a client, the professional liability policy may respond. However, the policy’s coverage is typically limited to the professional services described in the policy, and insurers may argue that reliance on AI output is not a “professional service” or that the insured failed to exercise the professional judgment the policy contemplates.

Cyber / Data Breach Insurance

Cyber policies cover data breaches, privacy liability, network security incidents, and cyber extortion. AI-related data privacy claims, such as those arising from the unauthorized collection or processing of personal data by an AI system, may trigger cyber coverage. However, cyber policies are evolving rapidly, and some are beginning to include exclusions for losses arising from AI-specific risks, such as algorithmic bias or AI model manipulation.

Directors and Officers (D&O) Insurance

D&O policies cover claims against directors and officers for wrongful acts in their capacity as corporate leaders. AI governance failures such as inadequate oversight of AI deployment, failure to address known bias in AI systems, or misleading disclosures about AI capabilities could give rise to shareholder derivative suits or regulatory actions that implicate D&O coverage. The growing trend of holding individual executives accountable for cybersecurity and technology failures (exemplified by recent SEC enforcement actions involving Chief Information Security Officers) suggests that AI governance will increasingly be viewed as a board-level responsibility.

Employment Practices Liability (EPLI)

EPLI policies cover claims of discrimination, harassment, wrongful termination, and other employment-related torts. AI-related hiring, promotion, or compensation decisions that produce discriminatory outcomes could trigger EPLI coverage. However, EPLI policies vary in their treatment of technology-driven discrimination, and some may not clearly address liability arising from automated decision-making.

The Gaps and What to Do About Them

The reality is that many existing insurance policies were not designed for AI risk, and policy language has not caught up with the technology. Underwriters, meanwhile, are responding as AI usage, and the risks associated with it, grows. Their objective is often to draft endorsements, across a variety of insurance lines and products, that make clear there is no coverage, or only limited coverage, for losses arising from the use of AI. Costly coverage disputes and litigation over the interpretation and scope of this new language are all but inevitable. For policyholders, this backdrop creates significant uncertainty, and confirming that AI-related losses are insured will take real time and expense. That investment, however, is likely to pale in comparison to the cost of ignoring the exercise altogether.

Policyholders can take the following steps now to avoid, or at least mitigate, the risk that a substantial loss arising from the use of AI, whether by the business or its vendors, is uncovered or only partially covered.

  1. Review Your Policies Through an AI Lens

If your business uses AI, review each policy in your insurance program and ask: does this policy cover a claim arising from the AI systems we use? Pay particular attention to definitions of “professional services,” “occurrence,” “wrongful act,” and “advertising injury.” Look for AI-specific exclusions, which are beginning to appear in newer policy forms. Coordinate with your broker and coverage counsel to help identify and discuss the gaps.

  2. Evaluate Whether You Need Specialized AI Coverage

A growing number of insurers are developing AI-specific coverage products. These are policies designed to address the unique risks of AI deployment, including errors in AI output, algorithmic bias, and AI-related intellectual property claims. These products are in their early stages, but they may fill gaps that traditional coverage lines do not address.

  3. Document Your AI Governance Practices

Insurers are increasingly asking about AI usage in underwriting questionnaires. Businesses that can demonstrate robust AI governance (including policies for AI procurement and deployment, human oversight of AI decision-making, regular auditing of AI systems for bias and accuracy, and documented incident response procedures) will be in a stronger position to obtain coverage, and possibly to press for coverage if a claim is denied.
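The "regular auditing of AI systems for bias" mentioned above can be made concrete. The sketch below is illustrative only and not from the article: it applies one widely used screening heuristic, the EEOC's "four-fifths rule" for disparate impact, to a hypothetical decision log from an AI hiring tool. The group labels, data, and function names are all assumptions for the example.

```python
# Illustrative bias-audit sketch (hypothetical data and names): flag groups
# whose selection rate falls below four-fifths of the most-favored group's
# rate, the EEOC's traditional disparate impact screening heuristic.

from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs.
    Returns each group's selection rate."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag each group whose selection rate is below `threshold`
    (four-fifths) of the highest group's selection rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit log of an AI screening tool's pass/fail decisions:
log = ([("group_a", True)] * 60 + [("group_a", False)] * 40
       + [("group_b", True)] * 35 + [("group_b", False)] * 65)

flags = disparate_impact_flags(log)
# group_a rate = 0.60, group_b rate = 0.35; 0.35 / 0.60 ≈ 0.58 < 0.8,
# so group_b would be flagged for further review.
```

A flagged ratio is a screening signal, not a legal conclusion, but running and documenting such checks on a regular schedule is exactly the kind of auditing evidence an underwriter (or a court) may ask to see.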

  4. Coordinate Insurance with Your AI Risk Management Strategy

Insurance is one component of a broader AI risk management strategy that should also include contractual protections (vendor agreements with AI providers should include indemnification and insurance requirements), technical safeguards (testing, monitoring, and human-in-the-loop controls), compliance programs (data privacy, anti-discrimination, and industry-specific regulations), and employee training on how to avoid compromising legal privileges. Insurance can cover the residual risks that these other tools cannot eliminate, but only if the insurance program is designed with AI risk in mind.

AI is transforming how businesses operate, but it is also transforming the risk landscape. The insurance industry is still catching up, and businesses that fail to proactively address AI-related coverage gaps may find themselves exposed when a claim arises. Do not wait for a claim to discover whether your insurance program adequately addresses AI risk. By reviewing existing policies, exploring specialized coverage options, strengthening AI governance practices, and working with knowledgeable insurance and legal counsel, businesses can position themselves to benefit from AI's potential while managing its risks responsibly.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

