Trending Up: Liability by Algorithm
Lawmakers and courts are shifting from theory to rulemaking as AI takes the wheel, bringing new questions about fault along for the ride. Recent legislation in the EU and UK places a presumptive share of liability on manufacturers and software providers, treating autonomous decision-making as an extension of product performance rather than driver behavior. The EU's revised Product Liability Directive, for instance, extends strict product liability to software and AI systems, while the UK's Automated Vehicles Act 2024 assigns responsibility for a self-driving vehicle to the authorized entity behind it rather than the person in the driving seat.
In the U.S., emerging state-level laws and agency guidance are starting to follow this pattern, especially where human oversight is limited. This trend marks a subtle but significant shift in the legal guardrails around AI: companies are no longer just expected to disclose risks; they are being required to absorb them.
Trending Down: Compliance Guesswork
With AI regulation maturing at the state and international levels, the era of legal ambiguity may be coming to an end. Not long ago, companies could cite the absence of formal rules to justify flexible, innovation-first approaches. In 2025, that strategy is far less tenable: respondents to our January survey flagged compliance with jurisdiction-specific regulations as their second-biggest concern. New legal requirements, from the EU AI Act to China's interim measures on generative AI services to statutes in more than two dozen U.S. states, demand active engagement from legal and compliance teams.
From mandatory bias audits and transparency disclosures to evolving definitions of "high-risk" systems, the legal expectations are no longer a mystery; they are a moving target. And while enforcement may lag in some regions, regulators and plaintiffs alike are already treating these standards as de facto benchmarks.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.