On June 22, 2025, the Texas governor signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA or the Act) into law, making Texas the second state, after Colorado, to pass comprehensive artificial intelligence (AI) regulation. The Act, which places categorical limitations on the deployment and development of AI systems, will take effect on January 1, 2026, one month before the Colorado AI Act. Given this timeline and the civil penalties available under each statutory scheme, companies should evaluate their uses of AI to ensure compliance ahead of 2026.
TRAIGA outlines a set of prohibited practices related to AI, including use of AI to manipulate human behavior, assign a social score (by government entities), discriminate unlawfully, infringe on constitutional rights, and capture biometric data without consent. Notably, the Act includes provisions implementing a regulatory sandbox program meant to promote innovation as well as responsible deployments of AI. The Act also establishes the Texas Artificial Intelligence Council—a group of experts who will advise on the ethical, privacy, and public safety implications of deploying or developing AI systems in certain contexts or for particular uses. The Act aims to protect Texas consumers from the foreseeable risks associated with using AI systems and contains language promoting transparency, notice to consumers, and the responsible development and use of AI systems.
While TRAIGA takes inspiration from the Colorado AI Act and the EU AI Act, as evidenced by the Act's definition of "artificial intelligence system," its focus on transparency, and its retention of the "developer" and "deployer" distinction, it is quite distinct from both. An earlier version of the bill shared many similarities with the Colorado AI Act's risk framework, which centers on regulating "high-risk" AI; however, the final text reflects a simpler and much more targeted approach to AI regulation.
TRAIGA was passed in the shadow of a proposed federal 10-year moratorium on state AI laws within the One Big Beautiful Bill Act, which presented some uncertainty around the Act's future enforcement. The proposed ban was stripped from the bill in an early Senate vote on July 1. For now, the moratorium on state AI regulation is not moving forward, but this could change in the future as Congress continues to debate the issue.
In this post, we identify notable takeaways from TRAIGA and summarize its key provisions. We are happy to answer any questions you have about this law and its potential implications for your data privacy compliance program. To stay up to date on the latest privacy law developments, please subscribe to the WilmerHale Privacy and Cybersecurity Law Blog.
KEY TAKEAWAYS
- TRAIGA is the second "comprehensive" state AI regulation to be passed, but it is much narrower than the Colorado AI Act.
- Many of TRAIGA's provisions only apply to state government entities.
- Certain conduct is expressly prohibited by TRAIGA (like in the EU AI Act).
- TRAIGA amends Texas's Capture or Use of Biometric Identifier Act (CUBI) by clarifying the meaning of consent when an individual's images are publicly available online.
KEY PROVISIONS OF TRAIGA
As noted above, TRAIGA outlines a set of prohibited uses of AI for covered entities (e.g., producers, promoters, developers, and deployers of AI systems) and governmental entities. Almost all the prohibitions include an intent requirement, further establishing the statute's aim to regulate purposeful behavior. The Act also amends certain provisions in Texas's biometric privacy law related to consent and expands exemptions to account for permissible uses of biometric identifiers in AI systems.
Applicability: TRAIGA applies to a person who:
- (1) promotes, advertises, or conducts business in this state;
- (2) produces a product or service used by residents of this state; or
- (3) develops or deploys an artificial intelligence system in this state.
Notable Definitions
- Artificial Intelligence System: TRAIGA defines this as "any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments."
- Consumer: TRAIGA defines this as "an individual who is a resident of this state acting only in an individual or household context. The term does not include an individual acting in a commercial or employment context."
- Developer: TRAIGA defines this as "a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state."
- Deployer: TRAIGA defines this as "a person who deploys an artificial intelligence system for use in this state."
Prohibited Uses of AI: TRAIGA prohibits the following activities:
- Manipulation of Human Behavior: TRAIGA prohibits a person from developing or deploying an AI system in a manner that "intentionally aims to incite or encourage" someone to:
- (1) commit physical self-harm, including suicide;
- (2) harm another person; or
- (3) engage in criminal activity.
- Social Scoring: Similar to Article 5 of the EU AI Act, TRAIGA prohibits a governmental entity from using or deploying an AI system that evaluates or classifies a person or a group of people based on social behavior or personal characteristics, including characteristics that are inferred or predicted, with the intent to calculate or assign a social score (or similar categorical estimation) that is likely to result in:
- (1) detrimental or unfavorable treatment in a context unrelated to the one where the behavior was observed or characteristics were noted;
- (2) detrimental or unfavorable treatment that is disproportionate to the nature or gravity of the noted behavior or characteristics; or
- (3) the infringement of a person's rights under the U.S. or Texas Constitution, state law, or federal law.
- Capture of Biometric Data: TRAIGA prohibits a governmental entity from developing or deploying an AI system that uses a person's biometric data or a collection of publicly available images or other media of a person to uniquely identify them without their consent, if the gathering would infringe on any right of the individual under the U.S. or Texas Constitution, state law, or federal law.
- Infringing on Constitutional Rights: TRAIGA prohibits a person from developing or deploying an AI system with the sole intent of infringing, restricting, or otherwise impairing an individual's rights guaranteed under the U.S. Constitution.
- Unlawful Discrimination: TRAIGA prohibits a person from developing or deploying an AI system with the intent to unlawfully discriminate against a protected class in violation of state or federal law. Disparate impact alone is not sufficient to demonstrate an intent to discriminate.
- Exemption: There is an exemption in this section for insurance entities that are subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the insurance business. There is also an exemption for financial institutions that are in compliance with other federal and state banking laws.
- Certain Sexually Explicit Content: TRAIGA prohibits covered entities from (1) developing or distributing an AI system with the sole intent of producing, assisting or aiding in producing, or distributing (a) visual material in violation of the Texas Penal Code or (b) deep fake videos or images in violation of the Texas Penal Code; or (2) intentionally developing or distributing an AI system that involves sexually explicit content and minors.
Disclosure to Consumers: TRAIGA requires governmental agencies and providers of healthcare services that make AI systems available to consumers to disclose that consumers are interacting with an AI system, either before or at the time of the interaction. This requirement applies regardless of whether it would be obvious to a reasonable consumer that they are interacting with an AI system. The disclosure must:
- (1) be clear and conspicuous;
- (2) be written in plain language; and
- (3) not use a dark pattern.
The explicit prohibition against dark patterns in disclosure reflects a growing trend in state privacy legislation toward recognizing the harms of manipulative design and manufactured consent.
Regulatory Sandbox: TRAIGA creates an AI regulatory sandbox program that enables businesses to obtain legal protection and limited access to the Texas market to test innovative AI systems without obtaining a license, registration, or other regulatory authorization. The idea of an AI regulatory sandbox largely stems from the EU AI Act. The purpose of the program is to promote the safe and responsible use of AI systems by providing clear guidelines for experimentation. The attorney general and state agencies are barred from filing charges or pursuing punitive action against a program participant for a violation of laws or regulations waived during the sandbox testing period.
Artificial Intelligence Council: TRAIGA also creates the Texas Artificial Intelligence Council, a group of experts who will advise on the regulatory sandbox program, the ethics of certain uses of AI systems, public safety issues, legal roadblocks hindering AI innovation, and other applicable topics. The council may also issue reports related to AI compliance, ethics, data privacy and security, and legal risks associated with the use of AI in Texas.
Amendments to Texas's Biometric Privacy Law: The Act also amends Texas's biometric privacy law, CUBI. The amendments clarify the meaning of consent in the context of biometric identifiers: an individual has not consented to the capture or storage of a biometric identifier based solely on the fact that an image or other publicly available media containing the identifier exists, unless the individual to whom the identifier relates made the image or other media publicly available. TRAIGA also adds new exceptions to CUBI, stating that the law does not apply to:
- (1) the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; or
- (2) the development or deployment of an artificial intelligence model or system for the purposes of:
- (A) preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;
- (B) preserving the integrity or security of a system; or
- (C) investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.
Enforcement
- State AG Enforcement: The Texas attorney general has exclusive authority to bring actions in response to violations of the Act and to obtain civil penalties and injunctive relief. There is no private right of action.
- Agency Enforcement: State agencies may also impose sanctions on entities licensed, registered, or certified by that agency for violations in certain circumstances.
- Cure Period: TRAIGA provides companies with a 60-day period to cure violations.
- Civil Penalties: TRAIGA allows the state attorney general to obtain civil penalties of up to $12,000 for curable violations and up to $200,000 for uncurable violations. Civil penalties may be recovered per violation.