Sometimes the most revealing AI regulations aren't the ones that say "you must" — they're the ones that say "you must not."
We often focus on the rules for developing, deploying, and procuring AI. But what may be more telling is where the rules stop entirely. Not the "how-to" of compliance, but the "you must not" of prohibition. These hard lines, where legislators draw boundaries around algorithmic authority, reveal an emerging consensus about where algorithmic decision-making creates unacceptable risks.
The EU's Forbidden Zone: Where Algorithms Fear to Tread
Article 5 of the EU AI Act (enforceable since February 2025) bans AI practices presenting "unacceptable risk," regardless of safeguards or oversight. These are not regulatory speed bumps; rather, they are solid walls. These bans generally target manipulative or surveillance-heavy AI:
- Article 5(1)(a): Prohibits AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behavior by appreciably impairing the person's ability to make an informed decision, causing them to take a decision they would not otherwise have taken and causing (or being reasonably likely to cause) significant harm. Translation: No sneaky AI nudging you into decisions you wouldn't normally make.
- Article 5(1)(b): Bans systems exploiting vulnerabilities related to age, disability, or social or economic situation to materially distort behavior, causing or being reasonably likely to cause significant harm.
- Article 5(1)(c): Prohibits social scoring, that is, evaluating or classifying individuals based on social behavior or personal characteristics in ways that lead to unjustified or disproportionate detrimental treatment.
- Article 5(1)(h): Restricts real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions for serious crimes, missing persons, or imminent threats.
These prohibitions share a common thread: they target AI that undermines human autonomy, whether by bypassing deliberation (subliminal tactics, vulnerability exploitation) or by enabling comprehensive surveillance (social scoring, biometric identification).
The American Patchwork: When Algorithms Can't Make the Call
US jurisdictions target algorithmic decision-making in employment with specific restrictions:
- New York City Local Law 144 (effective July 2023):
- Requires annual bias audits for automated employment decision tools (AEDTs), examining disparate impact by race/ethnicity and sex;
- Mandates notice to candidates/employees about AEDT use; and
- Requires publicly available audit results and data retention policy disclosure. Think of it this way: if your company's AI resume screener consistently filters out qualified candidates from certain ZIP codes (which can serve as a proxy for race or socioeconomic status), you'll need documentation showing you tested for and addressed this bias (see the illustrative impact-ratio sketch below).
- Illinois Artificial Intelligence Video Interview Act (effective January 2020):
- Requires notifying applicants about AI use in video interviews and its mechanics;
- Mandates consent before use; and
- Limits video sharing to evaluators and requires destruction within 30 days upon request.
- California Civil Rights Council Regulations (effective October 1, 2025):
- Clarify that automated decision systems (ADS) violating existing Fair Employment and Housing Act (FEHA) anti-discrimination protections are unlawful;
- Extend recordkeeping requirements for ADS data to four years; and
- Note that anti-bias testing is relevant to discrimination defenses (but not mandated).
The pattern: transparency and accountability in AI-assisted hiring, not outright bans, with a focus on preventing opacity and disparate impact.
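To make the NYC requirement concrete, the city's implementing rules measure disparate impact using an impact ratio: each group's selection rate divided by the selection rate of the most-selected group. The sketch below is a minimal illustration in Python, using hypothetical group labels and outcomes; an actual Local Law 144 audit must follow the Department of Consumer and Worker Protection's (DCWP) rules and be conducted by an independent auditor.

```python
# Minimal illustration of a Local Law 144-style impact ratio calculation.
# Group labels and outcomes below are hypothetical.
from collections import Counter

def impact_ratios(outcomes):
    """outcomes: iterable of (group, was_selected) pairs.
    Returns each group's selection rate divided by the highest group rate."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    if top_rate == 0:
        return {g: 0.0 for g in rates}
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, advanced_to_interview)
sample = [("Group A", True), ("Group A", True), ("Group A", False),
          ("Group B", True), ("Group B", False), ("Group B", False)]
print(impact_ratios(sample))  # {'Group A': 1.0, 'Group B': 0.5}
```

Real audits also break results out by the intersections of race/ethnicity and sex categories under the DCWP rules, and apply analogous math to tools that score rather than select candidates.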
State-Level Comprehensive Frameworks
- Texas House Bill 149 (TRAIGA, effective January 1, 2026):
- Prohibits development or deployment of AI with intent to discriminate against protected classes; and
- Requires government entities to disclose to consumers when they interact with AI systems.
- Colorado SB 24-205 (Colorado AI Act, effective June 30, 2026, delayed from February 2026):
- Targets "high-risk" AI systems with impact assessments, risk management policies, and consumer notice requirements; and
- Requires developers and deployers to use reasonable care to prevent algorithmic discrimination.
- Other 2025 State-Level Developments:
- Utah: Amended its Artificial Intelligence Policy Act (effective May 2025) to narrow disclosure requirements, focusing on "high-risk" AI interactions in regulated occupations and establishing safe harbor provisions for compliant systems.
- Connecticut: SB 2, which would have mandated impact assessments for high-risk AI systems, passed the Senate but stalled in the House amid gubernatorial veto threats over innovation concerns.
- Virginia: HB 2094, which would have established comprehensive high-risk AI consumer protections, was vetoed by the Governor in March 2025 over concerns about stifling innovation. This development highlights ongoing legislative friction despite broad support for AI regulation.
Credit and Financial Services
Preexisting laws apply to AI-driven credit decisions:
- Fair Credit Reporting Act (FCRA, 15 U.S.C. § 1681 et seq.): Section 615 mandates adverse action notices when decisions are based on consumer reports. CFPB guidance (Circular 2023-03) emphasizes that, combined with ECOA's specific-reasons requirement, complex algorithms must still produce explainable adverse action reasons (see the illustrative sketch after this list).
- Equal Credit Opportunity Act (ECOA, 15 U.S.C. § 1691 et seq.) and Regulation B (12 CFR Part 1002): Section 1002.9(b)(2) requires creditors to provide specific, actionable reasons for adverse decisions. CFPB Circulars 2022-03 and 2023-03 confirm "the algorithm" is not a valid reason.
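There is no single prescribed method for generating those reasons, but one common approach is to rank the features that pulled an applicant's score down relative to a baseline and translate the largest negative contributors into plain-language reason statements. The sketch below assumes a simple linear scoring model with hypothetical feature names, weights, baselines, and reason text; actual adverse action notices must reflect the creditor's real decision logic and satisfy Regulation B.

```python
# Illustrative only: derive candidate adverse action reasons from a simple
# linear credit-scoring model. All feature names, weights, baselines, and
# reason statements are hypothetical.

WEIGHTS = {"credit_utilization": -40.0, "months_since_delinquency": 2.0,
           "inquiries_last_6mo": -15.0, "account_age_years": 10.0}
BASELINE = {"credit_utilization": 0.30, "months_since_delinquency": 24,
            "inquiries_last_6mo": 1, "account_age_years": 8}
REASONS = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "months_since_delinquency": "Time since most recent delinquency is too short",
    "inquiries_last_6mo": "Too many recent credit inquiries",
    "account_age_years": "Length of credit history is too short",
}

def candidate_reasons(applicant, top_n=2):
    """Rank features by how much they lowered the score versus the baseline."""
    contributions = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASONS[f] for _, f in negatives[:top_n]]

applicant = {"credit_utilization": 0.85, "months_since_delinquency": 6,
             "inquiries_last_6mo": 5, "account_age_years": 2}
print(candidate_reasons(applicant))
```

The same idea extends to more complex models via feature attribution methods, but whatever method is used, the stated reasons must be specific and accurate, not "the algorithm decided."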
The Housing Context
The Fair Housing Act (42 U.S.C. § 3601 et seq.) supports disparate impact liability for AI in tenant screening, mortgage underwriting, and property valuations under the Supreme Court's 2015 Inclusive Communities decision. However, HUD's September 2025 withdrawal of disparate impact guidance (including the 2016 post-Inclusive Communities guidance and the 2024 AI advertising guidance) signals a dramatic enforcement shift toward intentional discrimination claims only. While HUD has withdrawn its guidance and shifted enforcement priorities, the Fair Housing Act and the Inclusive Communities precedent still stand; it is the enforcement approach, not the law, that has changed.
Healthcare and Insurance
While housing regulators grapple with enforcement priorities, the healthcare sector is charting a clearer path forward.
- Colorado SB 21-169: Requires certain insurers to establish governance frameworks and test external consumer data and AI systems for unfair discrimination based on protected classes.
- HIPAA Privacy Rule (45 CFR § 164.524): Guarantees individuals access to their protected health information, which may indirectly support review of data used in AI-driven healthcare decisions.
- Texas SB 1188 (effective September 2025): Requires healthcare practitioners to maintain human oversight of AI-generated medical decisions, disclose AI use to patients, and physically store electronic health records in the US.
What the Boundaries Reveal
These regulatory frameworks do not ban AI capabilities outright; instead, they generally establish boundaries requiring:
- Transparency: Disclosing use and explaining outcomes.
- Human Oversight: Preserving decision-making authority, not just involvement.
- Contestability: Enabling challenges/appeals of algorithmic decisions.
- Accountability: Mandating bias audits, impact assessments, and risk management.
Practical Governance Implications
For AI governance frameworks:
- Risk Classification: Map AI use cases against prohibited practices (e.g., social scoring) and high-risk categories (employment, credit); a simple triage sketch follows this list.
- Human Oversight Architecture: Ensure humans have the expertise and authority to evaluate and override AI outputs (consistent with, for example, Texas SB 1188's human oversight requirement).
- Documentation: Conduct required assessments (e.g., NYC bias audits, Colorado discrimination assessments).
- Explainability: Meet FCRA/ECOA standards with specific, defensible reasons — not "the algorithm decided."
- Notice and Consent: Comply with specific notice obligations (e.g., Illinois video interviews, Colorado consumer notices).
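To operationalize the risk classification step, some teams encode the mapping as data so that every proposed use case is triaged the same way. The sketch below is a simplified, hypothetical triage table in Python; the tiers and obligations shown are illustrative and would need to be tailored to the jurisdictions actually in scope.

```python
# Hypothetical triage table mapping AI use cases to regulatory tiers.
# Tiers and obligations are illustrative, not legal advice.

PROHIBITED = {"social_scoring", "subliminal_manipulation", "vulnerability_exploitation"}
HIGH_RISK = {
    "resume_screening": ["NYC LL144 bias audit", "candidate notice"],
    "credit_underwriting": ["ECOA/Reg B adverse action reasons", "FCRA notice"],
    "tenant_screening": ["Fair Housing Act disparate impact review"],
    "clinical_decision_support": ["Texas SB 1188 human oversight", "patient disclosure"],
}

def triage(use_case: str) -> dict:
    """Classify a proposed AI use case and list baseline obligations."""
    if use_case in PROHIBITED:
        return {"tier": "prohibited", "obligations": ["do not deploy"]}
    if use_case in HIGH_RISK:
        return {"tier": "high-risk", "obligations": HIGH_RISK[use_case]}
    return {"tier": "needs review", "obligations": ["document purpose, data, and oversight"]}

print(triage("resume_screening"))
print(triage("social_scoring"))
```

A table like this also gives auditors and regulators a clear record of how each use case was classified and which obligations were triggered.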
The Compliance Question
Evaluate AI implementations by asking:
- Does this system make consequential decisions (employment, credit, housing, healthcare, benefits)? What specific requirements apply?
- Can a human evaluate and override AI reasoning?
- Could we defend an adverse decision to a regulator with specific reasons?
- Have we conducted required bias audits/impact assessments?
Looking Forward
As of October 2025, states like New York (AI companion safeguards) and California (finalized AI discrimination regs) add layers, while federal efforts (e.g., the AI Bill of Rights) lag. Successful organizations will be those that hardwire human agency and accountability into AI architecture, ensuring compliance with evolving laws. The boundaries are being drawn now — and crossing them, even inadvertently, could prove costly.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.