- Federal Court Limits HIPAA's Reach on Website Tracking Data: In American Hospital Association v. Xavier Becerra, a federal court struck down parts of the U.S. Department of Health and Human Services' (HHS) guidance that extended HIPAA's reach to third-party online tracking technologies on public health websites. The court ruled that metadata such as IP addresses does not qualify as protected health information (PHI) under HIPAA, especially when collected from unauthenticated webpages that lack a clear connection to an individual's healthcare. The decision limited HHS's authority and highlighted the difficulty of applying HIPAA to modern technologies, leaving covered entities uncertain about their compliance obligations. Despite the ruling, companies handling health data must still carefully monitor their use of tracking technologies, particularly on authenticated pages, and ensure their privacy practices align with ongoing regulatory updates.
- Free Speech and Data Protection Collide in
California: In August 2024, the U.S. Court of Appeals for
the Ninth Circuit partially upheld and partially vacated a
preliminary injunction against the California Age-Appropriate
Design Code Act (AADCA). The court ruled that the law's
requirement for businesses to submit data protection impact
assessments (DPIAs) likely violated the First Amendment by compelling speech, but it vacated the injunction as to the law's other provisions, leaving them subject to further review. In the meantime, businesses must comply with the remaining provisions by March 2025 while constitutional challenges to the law continue.
- Protecting Workers' Rights in the Age of AI
Surveillance: The U.S. Consumer Financial Protection
Bureau (CFPB) and the Department of Labor are addressing workplace surveillance concerns
after reports of companies using AI-driven monitoring technology
that collects personal and biometric data without employee consent.
The CFPB's new guidance emphasizes that, under the Fair Credit Reporting Act, organizations must be transparent about monitoring, obtain employee consent, and allow employees to dispute any inaccurate information gathered through surveillance tools. This effort, backed by various
advocates, aims to protect workers' rights as technology
evolves, ensuring that monitoring practices do not infringe upon
employee privacy or create unfair working conditions.
- California Privacy Agency Strengthens Data Broker
Oversight with New Enforcement Sweep: The California
Privacy Protection Agency (CPPA) is ramping up its enforcement efforts, focusing on
data brokers' compliance with registration requirements under
the Delete Act. Having taken over the data broker registry on January 1, 2024, the CPPA aims to enhance transparency and empower consumers, with the Act's full provisions, including a deletion mechanism, set to take effect in 2026. Noncompliant brokers face daily penalties, and the CPPA's ongoing sweep seeks to identify unregistered brokers that met the registration criteria in 2023. Furthermore, the CPPA is expanding
its reach through collaborations with the FCC and international
regulatory bodies, emphasizing a comprehensive approach to privacy
enforcement.
- New York Department of Financial Services Issues Guidance on AI-Specific Risks: The New York Department of Financial Services (NY DFS) recently issued guidance to help covered entities
address AI-specific cybersecurity risks, enhancing compliance with
existing requirements under the NY DFS Cybersecurity Regulation (23
NYCRR Part 500). Key AI-related threats include AI-enabled social
engineering, AI-enhanced cyberattacks, risks to nonpublic
information (NPI) due to large datasets, and vulnerabilities from
third-party dependencies. NY DFS recommends measures like robust
risk assessments, third-party management, multi-factor
authentication, continuous monitoring, and comprehensive employee
training to mitigate these risks. The guidance encourages entities
to stay proactive with their cybersecurity strategies, while emerging AI trends, such as curated databases, competition over data quality, and new assessment protocols, present both opportunities and challenges for legal departments working to support AI readiness.
- 8 New State Privacy Laws in 2025: With eight new state privacy laws taking effect in 2025, U.S. businesses will face an increasingly complex landscape of state-specific compliance requirements. Starting in January, Iowa, Delaware,
Nebraska, New Hampshire, and New Jersey will implement privacy
regulations, followed by Tennessee, Minnesota, and Maryland later
in the year. Each state's law grants consumers rights to opt
out of targeted advertising, data sales, and profiling, while
imposing varying obligations on businesses, such as honoring
universal opt-out mechanisms and ensuring robust data protection
measures. Maryland's law stands out for its stringent
restrictions on data collection and advertising to minors,
signaling potentially tougher compliance demands across
states.
- What a Trump Administration Could Mean for Privacy, Cybersecurity, and Innovation: With the incoming Trump administration, tech litigators anticipate changes in federal regulatory focus on AI, cybersecurity, and data privacy. Federal privacy investigations may decrease, but state-driven litigation and private lawsuits are expected to remain active, especially in privacy and data protection. AI regulation may see a hands-off approach, allowing innovation to flourish, though states could step in with stricter measures if federal oversight loosens. National security concerns may drive export controls on AI, while regulatory scrutiny of AI use in areas like hiring, child protection, and defense is likely to increase, with a focus on balancing technological growth with targeted regulation.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.