ARTICLE
5 June 2023

AI/ML: Considerations Of Healthcare's New Frontier

Sheppard Mullin Richter & Hampton

Contributor

Sheppard Mullin is a full-service Global 100 firm with over 1,000 attorneys in 16 offices located in the United States, Europe and Asia. Since 1927, companies have turned to Sheppard Mullin to handle corporate and technology matters, high-stakes litigation and complex financial transactions. In the US, the firm’s clients include more than half of the Fortune 100.

Artificial Intelligence (AI) and Machine Learning (ML) are bringing healthcare into a new frontier with vast potential to improve clinical outcomes, manage resources, and support therapeutic development. They also raise ethical, legal, and operational conundrums that can, in turn, amplify risk.

Where do AI and ML stand today? Go, stop, go.

2023 has brought a rollercoaster of activity, marked by tremendous advancements and a reckoning with their implications that has prompted efforts to corral unchecked expansion. After seeing the warp-speed growth in AI technology, many industry leaders called for a pause on further development of at least six months, only to watch others continue capitalizing on target-rich opportunities. This push-and-pull reflects the need to be thoughtful in AI/ML investment and use.

Activity at the governmental level is also rapidly evolving. In late 2022, the White House released a "Blueprint for an AI Bill of Rights" to guide the design, use, and deployment of automated systems, prioritizing civil rights and democratic values. On April 3, 2023, the FDA issued draft guidance to develop the agency's regulatory framework for AI/ML-enabled device software functions. The guidance proposes an approach to ensure the safety and efficacy of AI/ML that uses adaptive mechanisms to incorporate new data and improve in real time. Given the lack of comprehensive federal legislation on AI, states have been active in developing privacy legislation. Additionally, to align on patient-centric, health-related AI standards, the Coalition for Health AI released a "Blueprint For Trustworthy AI Implementation Guidance and Assurance for Healthcare" in early April.

These accelerated developments have prompted calls to action internationally. Italy temporarily banned ChatGPT in April and opened an investigation into the application's suspected breach of the GDPR. Spain, Canada, and France have raised similar concerns and launched investigations. EU lawmakers have called for an international summit and new AI rules, including changes to the proposed AI Act. Consequently, the implementation of AI/ML technology oversight and accountability practices is increasingly becoming a regulatory priority.

Key areas of AI growth

Legal and industry considerations

Although the goal of AI/ML technology is to offer "smarter" care, to date, the patient-provider relationship remains crucial in ensuring patients receive proper healthcare. AI's growth in healthcare and life sciences has also brought new legal and regulatory considerations, especially in the areas of:

  • FDA and SaMD: The use or assistance of AI algorithms in clinical decision-making may bring the technology within the purview of the FDA's regulatory authority if it meets the definition of a "medical device." The FDA has developed a framework to regulate AI/ML-enabled medical devices and AI/ML-based technologies that qualify as "Software as a Medical Device" (SaMD). As the technology evolves and public interest grows, the FDA remains active in issuing guidance on these topics.
  • Ethics and research: As AI applications expand into the scope of services traditionally performed by licensed practitioners, questions about the unlicensed practice of medicine may arise. The use of patient data in developing and testing AI technologies may also require informed consent and trigger IRB oversight. The need for human oversight, or the lack thereof, is likely to remain a continuing concern as AI proliferates, especially to monitor AI's capacity to generate incorrect results and cause unnecessary or improper care. Additionally, malicious and unintended applications of AI, such as biohacking, bioweapons, and the weaponization of health information, demand careful safeguarding and proactive vigilance from all stakeholders to ensure proper oversight.
  • Intellectual property and data assets: Healthcare innovators in the AI/ML space face a different IP climate, as the output of AI/ML systems may not receive the same protections as traditional works. Copyright and patents, for example, may not attach to output that is not the work of a human author or inventor. Rights in data assets, such as the raw and derivative data that underlie AI algorithms, also require monitoring.
  • Privacy and data rights: Healthcare privacy laws and regulations may be implicated at both the federal and state level. Patient information may be protected under HIPAA and state privacy laws, and may need to be de-identified before it can be shared and used to develop AI/ML products (a brief illustrative sketch follows this list). Further, consumer privacy laws and private lawsuits related to data rights give individuals a basis to monitor, and potentially object to, the use of their personal data in developing AI.
  • Reimbursement and coverage: The utilization and deployment of AI by healthcare providers and entities depends largely on financial incentives, including whether payers will cover AI-enabled services and at what rate new AI iterations of an innovation will be reimbursed. As the industry moves toward value-based care, AI may offer additional tools and opportunities.
  • Potential biases and inaccuracies: Despite the groundbreaking and revolutionary potential of AI/ML technologies, AI algorithms detect patterns in human-annotated data, which may be (1) outdated, homogenous, or incomplete and (2) susceptible to reproducing and perpetuating racial, sex-based, and even age-based biases. As a result, there is an increased focus on diversifying and expanding medical datasets to identify and mitigate these potential biases.
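
As a purely illustrative companion to the de-identification point in the privacy bullet above, the minimal Python sketch below drops a handful of hypothetical direct-identifier fields from a patient record before it would be used in AI/ML development. The field names and helper function are assumptions introduced for illustration only; HIPAA's Safe Harbor method actually requires removal of 18 specified categories of identifiers (or a formal expert determination), so this is a sketch of the concept rather than a compliant implementation.

```python
# Illustrative sketch only: removing hypothetical direct-identifier fields
# from a patient record before using it for AI/ML development.
# HIPAA Safe Harbor requires removal of 18 identifier categories (or an
# expert determination); this simplified example does not satisfy that standard.

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn", "date_of_birth"}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record with the hypothetical identifier fields removed."""
    return {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "mrn": "12345",
    "age": 54,
    "diagnosis_code": "E11.9",
    "hba1c": 7.2,
}

print(strip_direct_identifiers(patient))
# {'age': 54, 'diagnosis_code': 'E11.9', 'hba1c': 7.2}
```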

A pivotal moment

The tension between the push to develop AI technologies and the calls to hit pause has brought AI/ML growth to a pivotal moment. As industry and governments reckon with the enormous potential and risks of AI, it is paramount to track developments closely to ensure innovation is implemented in a manner that accelerates societal benefit while mitigating unintended harms.

Although there is uncertainty and risk, implementing AI with the right compliance framework and infrastructure offers an exciting opportunity to carry healthcare into a new frontier of improved patient outcomes and increased efficiency.

Originally Published by MedCity News

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
