ARTICLE
30 December 2025

Recent Developments In Artificial Intelligence And Privacy Legislation In New York State

Jackson Lewis P.C.


New York State's 2025 legislative session marked a notable moment in the evolution of artificial intelligence (AI) and privacy regulation. Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, creating one of the first state-level frameworks aimed specifically at the most advanced AI systems, while vetoing the proposed New York Health Information Privacy Act (NYHIPA), a bill that would have significantly expanded health data protections beyond existing federal law. Together, these developments provide important signals for businesses operating in or touching New York.

The RAISE Act

The RAISE Act amends the General Business Law to impose transparency and risk-management obligations on developers of certain high-end AI systems. The law is narrowly focused on "frontier models," defined by extraordinarily high computational thresholds: generally, models trained using more than 10²⁶ computational operations at a compute cost exceeding $100 million.

For most businesses, this means the law will primarily affect developers and deployers of the most powerful AI systems rather than everyday enterprise automation tools.

Practical examples of AI technologies that could fall within scope include:

  • Large language models such as GPT-4-class, Claude-class, or Gemini-class systems trained at a massive scale;
  • Generative AI systems capable of producing highly realistic video or audio content, including synthetic voices or deepfake-quality media;
  • Advanced medical or scientific AI tools, such as models used to support diagnostics, drug discovery, or large-scale biological simulations that require substantial computational resources.

Covered "large developers" must implement and publish a safety and security protocol (with limited redactions), assess whether deployment poses an unreasonable risk of "critical harm," and report certain safety incidents to the New York Attorney General within 72 hours, in contrast to changes to data breach laws that took effect at the end of 2024.

While the law does not create a private right of action, enforcement authority rests with the Attorney General, including significant civil penalties for violations.

The RAISE Act takes effect January 1, 2027.

For businesses that license or integrate frontier AI models from third parties, the RAISE Act is also relevant contractually. Vendors may pass through compliance obligations, audit rights, or usage restrictions as part of their efforts to meet statutory requirements.

Health Information Privacy Act Vetoed

Although NYHIPA was vetoed, its contents remain highly relevant, particularly for businesses in health, wellness, advertising, and AI-enabled consumer services. The bill would have applied broadly to any entity processing health-related information linked to a New York resident or to someone physically present in the state, regardless of HIPAA status, making it more expansive than the similar state health data laws in Washington and Nevada.

Key provisions included strict limits on processing health data without express authorization, detailed and standalone consent requirements, and explicit bans on consent practices that obscure or manipulate user decision-making. The bill would have excluded research, development, and marketing from "internal business operations," meaning AI training or product improvement using health data could have required new authorization. Individuals would also have been granted robust access and deletion rights, including obligations to notify downstream service providers and third parties of deletion requests, with a one-year look-back.

Takeaways for Businesses

Taken together, these developments reflect New York's intent to play a leading role in AI and privacy governance. For businesses, the message is not one of immediate across-the-board compliance, but of strategic preparation.

Companies developing or deploying advanced AI should strengthen governance, documentation, and incident-response processes. Organizations handling health-adjacent data, especially data that falls outside of HIPAA, should continue monitoring legislative activity and assess whether existing consent flows, data uses, and vendor arrangements would withstand a future version of NYHIPA or similar state laws.

New York's approach underscores a broader trend: even narrowly scoped laws can have a wide practical impact through contracts, product design, and risk management. Businesses that plan early will be best positioned as this regulatory landscape continues to evolve.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
