State lawmakers continue to move faster than Congress on regulating artificial intelligence (AI). Recent activity in Texas and Illinois adds three distinct compliance regimes—including two new laws in Texas, one general and one healthcare-specific—while Colorado has postponed—but not watered down—its own AI Act. At the same time, a proposed federal moratorium that would have paused new state AI laws was dropped from the final version of the One Big Beautiful Bill Act.
Texas – AI Laws: Texas Responsible AI Governance Act (TRAIGA) and Electronic Health Record Requirements Act
Texas has enacted two major AI laws: a broad commercial AI law (TRAIGA) and a healthcare-specific law governing AI in electronic health records.
TRAIGA (General AI Law)
Who is Covered
TRAIGA applies to any private entity that: (1) promotes, advertises, or conducts business in Texas; (2) produces a product or service used by Texas residents; or (3) develops or deploys an AI system in Texas. TRAIGA expressly covers both developers (those who build or offer an AI system) and deployers (those who put an AI system into use).
Scope & Key Duties
- Prohibits developing or deploying AI systems with the intent to unlawfully discriminate, manipulate human behavior, infringe constitutional rights, or create/distribute certain explicit content or deepfakes.
- Requires document production and information sharing if the Attorney General (AG) receives a consumer complaint.
- Establishes an "innovation sandbox" program for limited regulatory waivers during AI testing.
Enforcement & Penalties
- Exclusive enforcement by the Texas AG, who must provide written notice and 60 days to cure most violations.
- Civil penalties:
- $10,000–$12,000 per curable violation not cured within the 60-day period
- $80,000–$200,000 per uncurable violation
- $2,000–$40,000 per day for ongoing violations after the cure period
- On the AG's recommendation, state licensing agencies may suspend or revoke the licenses of entities they license, or impose fines of up to $100,000.
Effective Date
January 1, 2026.
Next Steps
Companies designing or offering AI systems that touch Texas should begin mapping development and deployment practices to TRAIGA's requirements now. Although TRAIGA applies only to "intentional" development or deployment of prohibited AI systems, the language of the statute provides that it should be broadly construed and applied to promote its purposes, including to protect individuals from risks associated with AI.
Texas Electronic Health Record Requirements Act (Healthcare-Specific AI Law)
Who is Covered
Most healthcare providers, health insurers, and a broad range of businesses and organizations that collect, maintain, or store electronic health records for Texas residents, including healthcare practitioners, payers, and vendors handling health information. Certain long-term care and community-based providers are excluded.
Scope & Key Duties
Among other requirements, the law:
- Requires all electronic health records containing patient information to be physically stored in the U.S. or a U.S. territory (effective for all records as of January 1, 2026).
- Permits healthcare practitioners to use AI for diagnostic purposes, including recommendations on diagnosis or treatment, provided the practitioner is licensed, reviews all AI-generated records in accordance with Texas Medical Board standards, and discloses AI use to patients.
- Mandates specific documentation of observed biological sex at birth and sexual development disorders in health records and requires that any AI decision-support tools in health records account for this information.
Enforcement & Penalties
- Investigations by the Texas Medical Board, Department of Insurance, Department of Licensing and Regulation, and other regulatory agencies.
- Civil penalties up to $5,000 per negligent violation per year, $25,000 per knowing or intentional violation per year, and $250,000 per violation involving knowing or intentional use of protected health information for financial gain.
- Disciplinary action, including suspension or revocation of licenses, for three or more violations.
Effective Date
September 1, 2025 (for new records); storage requirements apply to all records as of January 1, 2026.
Next Steps
Companies should ensure that:
- Any new electronic health records are stored in the U.S. or a U.S. territory.
- Any use of AI for diagnostic purposes is disclosed to patients.
- Their AI-based decision support tool vendors are aware of and compliant with this law.
Illinois – Wellness and Oversight for Psychological Resources Act
Who is Covered
- Tele-mental-health platforms, technology developers, healthcare providers, employers offering digital "therapy" or wellness tools, and any organization marketing services in Illinois that may be deemed "therapy" under Illinois law.
- Only licensed professionals—including Illinois-licensed clinical psychologists, social workers, professional counselors, physicians, and advanced practice psychiatric nurses—may provide "therapy or psychotherapy services."
Scope & Key Duties
- Prohibits the use of AI to provide mental health services or engage in therapeutic decision-making, including:
- Making independent therapeutic decisions.
- Directly engaging with clients in any form of therapeutic communication.
- Generating therapeutic recommendations or treatment plans without review and approval by the licensed professional.
- Detecting emotions or mental states.
- AI may be used solely for administrative or supplementary support (such as scheduling, billing, record-keeping, or analyzing anonymized data), but the licensed professional maintains full responsibility for the output of the AI.
- Written informed consent is required when AI tools are used to analyze session recordings or transcripts or to provide supplementary support in therapy or psychotherapy; the patient must be informed in writing of the use and specific purpose of the AI tool.
- The law does not apply to peer support or self-help materials and educational resources that are available to the public and do not purport to offer therapy or psychotherapy services.
- The Illinois Act contains a confidentiality provision requiring all records and communications between individuals seeking therapy and licensed professionals to be kept confidential, with disclosure only as required under the Illinois Mental Health and Developmental Disabilities Confidentiality Act.
Enforcement & Penalties
- The Illinois Department of Financial and Professional Regulation has express authority to investigate actual, alleged, or suspected violations.
- Civil penalties up to $10,000 per violation may be assessed, with no cure period.
- Penalties are determined based on the degree of harm and the circumstances of the violation.
Effective Date
August 4, 2025 (immediate).
Next Steps
Providers in the mental health space in Illinois should:
- Audit any AI tools used in Illinois mental-health offerings, including chatbots, recommendation engines, and analytics, and disable or wall off functionality that crosses into "therapeutic communication."
- Ensure compliance with the new consent and disclosure requirements.
- Review confidentiality practices.
Colorado – AI Act: Compliance Date Extended
During Colorado's August special session, following pressure from industry leaders and the Trump administration, lawmakers adopted an amendment shifting the AI Act's operative date from February 1, 2026, to June 30, 2026. All substantive obligations—risk-management programs, impact assessments, public disclosures, and "reasonable care" to prevent algorithmic discrimination—remain unchanged. The roughly five-month delay offers additional time to finalize compliance frameworks, and the Colorado legislature may further amend the law.
Federal Outlook – Moratorium Rejected, But Pressure on States Remains
A last-minute Senate proposal to preempt state AI legislation for 10 years was removed from the One Big Beautiful Bill Act this summer after bipartisan opposition. With no federal preemption on the horizon, the patchwork of state laws will continue to expand—making early, jurisdiction-specific compliance planning essential.
However, the Trump administration issued "Winning the Race: America's AI Action Plan" in July 2025. The AI Action Plan provides that the "Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds." States may pass less restrictive AI regulations in response to the threat of reduced funding for AI initiatives.
Conclusion
Given the rapidly evolving patchwork of state AI laws, businesses should proactively review their AI products, services, and compliance programs to identify where new obligations may apply. Consider conducting a risk assessment, updating internal policies, and training relevant teams on new requirements in Texas, Illinois, and Colorado. Early engagement with legal and compliance professionals can help you adapt to these changes, minimize enforcement risk, and position your organization for responsible AI innovation.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.