27 March 2025

Virginia Moves To Regulate High-Risk AI With New Compliance Mandates

Sheppard Mullin Richter & Hampton



On February 20, the Virginia General Assembly passed the High-Risk Artificial Intelligence Developer and Deployer Act. If signed into law, Virginia would become the second state, after Colorado, to enact comprehensive regulation of "high-risk" artificial intelligence systems used in critical consumer-facing contexts, such as employment, lending, housing, and insurance.

The bill aims to mitigate algorithmic discrimination and establishes obligations for both developers and deployers of high-risk AI systems.

  • Scope of Coverage. The Act applies to entities that develop or deploy high-risk AI systems used to make, or that are a "substantial factor" in making, consequential decisions affecting consumers. Covered contexts include education enrollment or opportunity, employment, healthcare services, housing, insurance, legal services, financial or lending services, and decisions involving parole, probation, or pretrial release.
  • Risk Management Requirements. AI deployers must implement risk mitigation programs, conduct impact assessments, and provide consumers with clear disclosures and explanation rights.
  • Developer Obligations. Developers must exercise "reasonable care" to protect against known or foreseeable risks of algorithmic discrimination and provide deployers with key system usage and limitation details.
  • Transparency and Accountability. Both developers and deployers must maintain records sufficient to demonstrate compliance. Developers must also publish a summary of the types of high-risk AI systems they have developed and the safeguards in place to manage risks of algorithmic discrimination.
  • Enforcement. The Act authorizes the Attorney General to enforce its provisions and seek civil penalties of up to $7,500 per violation.
  • Safe Harbor. The Act includes a safe harbor from enforcement for entities that adopt and implement a nationally or internationally recognized risk management framework that reasonably addresses the law's requirements.

So how does this compare to Colorado's law? Virginia defines "high-risk" more narrowly—limiting coverage to systems that are a "substantial factor" in making a consequential decision, whereas the Colorado law applies to systems that serve as a "substantial" or "sole" factor. Colorado's law also includes more prescriptive requirements around bias testing and impact assessment content, and provides broader exemptions for small businesses.

Putting It Into Practice: If enacted, the Virginia AI law will add to the growing patchwork of state-level AI regulations. In 2024, at least 45 states introduced AI-related bills, with 31 states enacting legislation or adopting resolutions. States such as California, Connecticut, and Texas have already enacted AI-related statutes. Given this trend, additional states are expected to introduce and enact comprehensive AI regulations in the near future.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
