ARTICLE
8 July 2025

Texas Enacts Sweeping AI Law: Disclosure, Consent, And Compliance Requirements Take Effect In 2026

Sheppard Mullin Richter & Hampton



On June 22, 2025, Texas Governor Greg Abbott signed into law House Bill 149, enacting the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). The law establishes one of the nation's most comprehensive state-level artificial intelligence (AI) regulatory frameworks. TRAIGA imposes disclosure, consent, and compliance requirements on developers, deployers, and governmental entities that use AI systems. The law is set to take effect on January 1, 2026.

Notably, TRAIGA defines an "artificial intelligence system" as any machine-based system that uses inputs to generate content, decisions, predictions, or recommendations that can influence physical or virtual environments. The law aims to advance the responsible development and use of AI while protecting individuals from foreseeable risks through structured oversight, disclosure requirements, and new programs such as a regulatory sandbox.

Key provisions of TRAIGA include:

  • Consumer Protection. The law prohibits the use of AI models that intentionally discriminate against protected classes, infringe on constitutional rights, or incite harm. Further, governmental entities are prohibited from using AI to identify individuals through biometric data without informed consent or to assign social scores based on behavior or personal characteristics.
  • Disclosure Guidelines. Any governmental or commercial entity that deploys an AI system intended to interact with consumers must provide a clear and conspicuous disclosure in plain language. The disclosure must be made before or at the time of interaction and cannot use deceptive interface designs known as "dark patterns."
  • AI Regulatory Sandbox Program. Subject to approval by the Department of Information Resources and any applicable agencies, a person may test an AI program in a controlled environment without being licensed under Texas laws. During this testing period, the attorney general may not file or pursue charges against a program participant for a violation that occurs while the participant is in the program.
  • Safe Harbors. Entities that substantially comply with recognized risk management frameworks, such as the NIST AI Risk Management Framework, or that detect violations through internal audits or adversarial testing may qualify for protection against enforcement.
  • Enforcement and Civil Penalties. The Texas Attorney General holds exclusive enforcement authority. Civil penalties range from $10,000 to $200,000 per violation, with daily penalties for continued noncompliance.

Putting It Into Practice: With TRAIGA, Texas becomes the second state to adopt a comprehensive AI regulatory framework, joining Colorado, which enacted its AI law in 2024 (previously discussed here). As states continue to take divergent approaches to AI regulation, such as the recent veto of Virginia's AI bill by the governor (previously discussed here), market participants should closely monitor the evolving patchwork of state-level regulations to assess compliance obligations, adjust risk management strategies, and plan for operational impacts across jurisdictions.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
