Virginia's General Assembly recently passed House Bill 2094 (HB2094), a landmark piece of legislation aimed at regulating the development, deployment, and use of high-risk artificial intelligence (AI) systems. The bill, which would have taken effect on July 1, 2026, would have introduced stringent requirements and civil penalties for noncompliance, enforced by the Attorney General. However, Virginia Governor Glenn Youngkin vetoed the bill on March 24, 2025, stating that the framework it sought to establish was "burdensome." Governor Youngkin highlighted his concern that the bill "stifles progress and places onerous burdens on" Virginia businesses. He further suggested that the bill would harm innovation, while noting that he supports "responsible governance of artificial intelligence (AI)," which he believes has already been achieved through his Executive Order 30 and a state-level AI task force.
If enacted, the Virginia bill would have applied certain requirements to "developers" and "deployers" of "high-risk artificial intelligence systems," aimed at preventing "algorithmic discrimination" in connection with "consequential decisions" made by AI systems. Developers—those that develop or modify high-risk AI systems—would have been required to make disclosures about the risks of algorithmic discrimination. Deployers—those that use a high-risk AI system to make a consequential decision—would have been required to: (i) implement risk management efforts; (ii) conduct impact assessments; and (iii) make certain customer disclosures. The bill contained several explicit exemptions for certain entities (including in the financial services, healthcare, and insurance contexts) and for certain purposes (such as responding to consumer requests or complying with contractual obligations). The bill created no private right of action; instead, the Attorney General would have been empowered to enforce the statute against entities that failed to exercise an adequate duty of care to reduce risks.
In many respects, the vetoed Virginia bill resembled the Colorado AI Act. Enacted in May 2024, Colorado's law also regulates high-risk AI and contains similar transparency and disclosure requirements. The Colorado and Virginia AI laws are less burdensome than the EU AI Act, which imposes far more extensive regulatory and compliance requirements. Nonetheless, both state laws impose risk-assessment and transparency obligations, compliance with which affords developers and deployers a "rebuttable presumption" that they have exercised the duty of care required by the statute.
Governor Youngkin's veto may signal an ongoing trend toward deregulation at the state level. Even when the Colorado AI Act (CAIA) was signed into law last year, Colorado Governor Jared Polis expressed "reservations." He noted that the bill deviated from traditional anti-discrimination frameworks by regulating AI outputs rather than discriminatory intent, and he was "concerned about the impact [CAIA] may have on an industry that is fueling critical technological advancements." He specifically singled out the law's "significant, affirmative reporting requirements."
Furthermore, Governor Polis expressed openness to federal preemption, citing concerns about a patchwork of state-level regulation. To that end, he encouraged the Colorado General Assembly to work with AI stakeholders on legislative amendments reflecting "evidence-based findings and recommendations for the regulation of" the AI industry, signaling a potential walk-back of some CAIA provisions, such as the disclosure requirements, though any such change ultimately depends on action by the Colorado legislature. Meanwhile, even though the requirements of the Colorado AI law are slated to take effect on February 1, 2026, Colorado's AI Impact Task Force has proposed significant revisions to the law, and Colorado legislators have already proposed alternatives.
Conclusion
Virginia's HB2094 would have marked the latest step in a state-level shift in the regulatory landscape for artificial intelligence. That trend now faces headwinds, however, as state executives and legislators weigh the possible effects of overregulation. As businesses navigate these regulatory debates, it is crucial to stay informed and proactive in compliance efforts. Our team is here to help you understand and implement the measures necessary to ensure your AI systems comply with the evolving AI regulatory landscape, whether under the EU AI Act, state laws such as Colorado's, or other regimes.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.