ARTICLE
25 November 2024

State-Specific Legislation Adds Layer Of Complexity To Use Of AI In Health Care

Wilson Elser Moskowitz Edelman & Dicker LLP

Noelle K. Sheehan (Partner – Orlando, Miami, Sarasota, West Palm Beach, FL) authored "State-Specific Legislation Adds Layer of Complexity to Use of AI in Health Care," an article published on November 21, 2024.

Artificial intelligence (AI) is hardly new in the practice of health care. Indeed, medical facilities and practitioners were among AI's "first movers" 50 years ago and remain at its leading edge.

The relatively recent advent of generative AI has further accelerated the incursion of machine learning into medicine at hockey-stick growth rates unlikely to abate any time soon. The related legal liabilities are endless – and endlessly complex. Congress's American Privacy Rights Act (APRA), currently circulating as a "discussion draft" with bipartisan support, is designed to offer some level of protection. Section 1557 of the Affordable Care Act, and the nondiscrimination rule finalized under it in 2024, has particular implications for the development and deployment of AI systems in the health care sector.

Apart from the APRA, AI systems must comply with other national and international laws and regulations already in force, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the European Union.

State-Specific Initiatives

States are following suit, taking a variety of legislative approaches tailored to local needs and concerns. While a few states have already enacted specific laws, others are in the process of developing comprehensive frameworks to ensure AI is used ethically and safely in health care settings. Collectively, they add a sizeable layer of regulatory complexity for local practitioners as well as regional and especially national health care organizations.

Some notable examples:

  • California. Known for its proactive stance on AI regulation, California has bills enacted or under consideration that address AI in various contexts. Notably, in September 2024, Governor Newsom signed into law the Generative Artificial Intelligence Accountability Act (SB 896), requiring state agencies to evaluate the risks and benefits of generative AI and mandating transparency in its use in state communications.
  • Georgia. In 2023, Georgia enacted House Bill 203 to regulate AI in optometric care. The law specifies that AI-based eye assessments cannot be the sole basis for issuing prescriptions and must be supplemented by a recent eye examination by a qualified professional.
  • Massachusetts. Also in 2023, Massachusetts introduced legislation aimed at regulating the use of AI in mental health treatment. Specifically, proposed bills such as House Bill 1974 and similar initiatives require mental health professionals to seek approval from licensing boards before integrating AI systems into treatment practices.
  • Pennsylvania. The Pennsylvania House of Representatives is currently advancing legislation aimed at improving transparency in how insurance companies use artificial intelligence. House Bill 1663 focuses on ensuring the ethical use of AI in health insurance claims processes. The bill requires health insurers to disclose when AI algorithms are used in claims evaluations and mandates that such algorithms be subjected to review and certification by the Pennsylvania Department of Insurance.
  • Vermont. Vermont has established a Division of Artificial Intelligence within its Agency of Digital Services, which is tasked with inventorying AI systems used by the state and proposing a code of ethics to address potential adverse impacts on residents.

Potential Liabilities

Legislators at the national and state levels have their work cut out for them as potential liabilities confronting health care facilities and individual practitioners grow in lockstep with AI technologies. Related risks may be broadly categorized as follows.

  • Medical Malpractice. Standards of care are evolving quickly, yet even they struggle to keep pace with advances in artificial intelligence. To the extent that AI systems are involved in diagnosing and treating patients' conditions, health care providers may be held liable for less-than-satisfactory care and outcomes. Human health care providers using AI tools are responsible for understanding and correctly using the technology; misuse of, or over-reliance on, AI can lead to malpractice claims.
  • Bias and Discrimination. AI systems can perpetuate or even amplify algorithmic biases present in training data, leading to discriminatory practices and resulting lawsuits. Should these systems disproportionately impact certain groups, they could be subject to legal challenges based on equal protection laws.
  • Product Liability. Manufacturers of AI systems may be liable if flaws inherent in the systems' design cause harm. They also may be held accountable for damages if they have not adequately warned users of the systems' limitations or potential risks.
  • Informed Consent. Patients should be informed when AI is used in their care and made to understand the potential risks and benefits. Failure to obtain proper informed consent may lead to legal issues. Patients have the right to make autonomous decisions about their care, and inadequate disclosure about AI involvement could infringe on that right.
  • Data Privacy and Security. AI systems processing patient data must comply with HIPAA in the United States. Breaches or mishandling of patient data can result in legal action. In the European Union, AI systems must adhere to the GDPR. Noncompliance can lead to significant fines and legal challenges.

When considering how best to avoid liabilities and lawsuits, conformance to Section 1557 and the principles reflected in the APRA discussion draft is a great place to start. Together, the underlying provisions address algorithmic bias, fairness in AI, data privacy and security, accountability and transparency, and equitable access to AI technologies, among other AI-related concerns. And while state-specific legislation similarly hews to the themes of transparency, accountability, and nondiscrimination, there are important variations and nuances that should be thoroughly understood and ultimately inform the development of risk management protocols.

Conclusion

AI's use in health care is expanding exponentially, as are the attendant risks. National and state legislators are at various stages of protecting their constituencies through new laws and regulations. Some of these are consistent and mutually reinforcing, others less so. Those in the insurance industry who serve health care facilities and practitioners need to understand and anticipate the changing landscape. It's important that they encourage and help their clients take constructive action now, mitigating financial and reputational harm in the future.

Originally published by PLUS

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
