ARTICLE
14 May 2025

HIPAA Compliance For AI In Digital Health: What Privacy Officers Need To Know

Foley & Lardner


Artificial intelligence (AI) is rapidly reshaping the digital health sector, driving advances in patient engagement, diagnostics, and operational efficiency. However, for Privacy Officers, AI's integration into digital health platforms raises critical concerns around compliance with the Health Insurance Portability and Accountability Act and its implementing regulations (HIPAA). As AI tools process vast amounts of protected health information (PHI), digital health companies must carefully navigate privacy, security, and regulatory obligations.

The HIPAA Framework and Digital Health AI

HIPAA sets national standards for safeguarding PHI. Digital health platforms—whether offering AI-driven telehealth, remote monitoring, or patient portals—are often HIPAA-covered entities, business associates, or both. Accordingly, AI systems that process PHI must do so in compliance with the HIPAA Privacy Rule and Security Rule, making it vital for Privacy Officers to understand:

  • Permissible Purposes: AI tools can only access, use, and disclose PHI as permitted by HIPAA. The introduction of AI does not change the traditional HIPAA rules on permissible uses and disclosures of PHI.
  • Minimum Necessary Standard: AI tools must be designed to access and use only the PHI strictly necessary for their purpose, even though AI models often seek comprehensive datasets to optimize performance.
  • De-identification: AI models frequently rely on de-identified data, but digital health companies must ensure that de-identification meets HIPAA's Safe Harbor or Expert Determination standards—and guard against re-identification risks when datasets are combined.
  • BAAs with AI Vendors: Any AI vendor processing PHI must be under a robust Business Associate Agreement (BAA) that outlines permissible data use and safeguards—such contractual terms will be key to digital health partnerships.

AI Privacy Challenges in Digital Health

AI's transformative capabilities introduce specific risks:

  • Generative AI Risks: Tools like chatbots or virtual assistants may collect PHI in ways that raise unauthorized disclosure concerns, especially if the tools were not designed to safeguard PHI in compliance with HIPAA.
  • Black Box Models: Digital health AI often lacks transparency, complicating audits and making it difficult for Privacy Officers to validate how PHI is used.
  • Bias and Health Equity: AI may perpetuate existing biases in health care data, leading to inequitable care—a growing compliance focus for regulators.

Actionable Best Practices

To stay compliant, Privacy Officers should:

  1. Conduct AI-Specific Risk Analyses: Tailor risk analyses to address AI's dynamic data flows, training processes, and access points.
  2. Enhance Vendor Oversight: Regularly audit AI vendors for HIPAA compliance and consider including AI-specific clauses in BAAs where appropriate.
  3. Build Transparency: Push for explainability in AI outputs and maintain detailed records of data handling and AI logic.
  4. Train Staff: Educate teams on which AI models may be used in the organization, as well as the privacy implications of AI, especially around generative tools and patient-facing technologies.
  5. Monitor Regulatory Trends: Track OCR guidance, FTC actions, and rapidly evolving state privacy laws relevant to AI in digital health.

Looking Ahead

As digital health innovation accelerates, regulators are signaling greater scrutiny of AI's role in health care privacy. While HIPAA's core rules remain unchanged, Privacy Officers should expect new guidance and evolving enforcement priorities. Proactively embedding privacy by design into AI solutions—and fostering a culture of continuous compliance—will position digital health companies to innovate responsibly while maintaining patient trust.

AI is a powerful enabler in digital health, but it amplifies privacy challenges. By aligning AI practices with HIPAA, conducting vigilant oversight, and anticipating regulatory developments, Privacy Officers can safeguard sensitive information and promote both compliance and innovation in the next era of digital health. Health care data privacy continues to evolve rapidly, and HIPAA-regulated entities should closely monitor new developments and continue to take the necessary steps toward compliance. Please reach out to the authors, your Foley relationship partner, or our Artificial Intelligence Area of Focus and Health Care & Life Sciences Sector teams with any questions.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
