Key Takeaways:

  • This executive order (EO) directs federal agencies to review and develop policies to guide the use of artificial intelligence, a technology that touches every sector of the economy.
  • The EO directs the Department of Health and Human Services (HHS) to establish an HHS AI Task Force to develop a strategic plan on the responsible deployment of AI and AI-enabled technologies in healthcare settings.
  • The EO also directs HHS to develop a strategy for regulating the use of AI-enabled tools in the drug development process.
  • The Biden administration's actions directing government agencies on the development, use, and maintenance of AI will draw renewed focus and interest to the use of AI going forward.

On October 30, 2023, President Biden issued an executive order (EO) to guide federal agencies on the development and use of artificial intelligence (AI). The administration views AI as holding numerous benefits but cautions that, if not responsibly managed, it could exacerbate societal harms.

The Biden administration laid out eight principles to guide federal agencies in advancing, using, and overseeing AI. The first principle is that AI must be safe and secure, meaning there must be robust, reliable, and standardized evaluations of AI systems, as well as policies or other mechanisms, including institutions, to test, understand, and mitigate risks from these systems before they are in use. This is particularly relevant, as the EO notes, to the biotechnology and cybersecurity industries. In meeting this principle, the administration will develop labeling and content provenance mechanisms to help determine when content is generated using AI and when it is not.

The next set of principles focuses on promoting responsible innovation, competition, and collaboration to allow the United States to lead in AI. This includes a commitment to supporting American workers and furthering health equity and civil rights. It also focuses on upholding consumer protection laws and Americans' privacy and civil liberties.

The final set of principles seeks to govern the Federal Government's own use of AI and increase its internal capacity to regulate and govern the responsible use of AI. This includes developing a framework to manage AI's risks, unlock AI's potential for good, and promote common approaches with other nations.

EO Healthcare Implications

To ensure safe and responsible use of AI in the healthcare industry, the EO directs the Department of Health and Human Services (HHS) to establish an HHS AI Task Force within one year. This task force shall develop a strategic plan that includes policies, and possibly regulatory action, on responsible deployment of AI and AI-enabled technologies in the healthcare sector, including research and discovery, drug and device safety, healthcare delivery and financing, and public health.

The EO directs HHS to identify appropriate guidance and resources to promote AI's deployment and use in a variety of settings and situations. This includes:

  • the development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing, including quality measurement, performance improvement, program integrity, benefits administration, and patient experience, and considering appropriate human oversight of the AI-generated output;
  • identifying uses of AI that promote workplace efficiency and satisfaction, and the development and maintenance of documentation to help users determine appropriate and safe uses of AI in local healthcare settings;
  • monitoring the long-term safety and performance of AI-enabled technologies, including clinically relevant or significant modifications and performance across population groups;
  • incorporating equity principles into AI-enabled technologies, including by using disaggregated data on affected populations and representative population data sets when developing new models, and by monitoring algorithmic performance against discrimination and bias in existing models.

To protect personally identifiable information, the EO calls for incorporating safety, privacy, and security standards into the software-development lifecycle, including measures to address AI-enhanced cybersecurity threats.

The EO also directs HHS to consider appropriate actions to advance compliance with, and understanding of, federal nondiscrimination laws by health providers that receive federal financial assistance, and how those laws relate to AI. This may include providing technical assistance or issuing guidance to healthcare providers and payers about their obligations under nondiscrimination and privacy laws as they relate to AI.

Notably, the EO directs HHS to develop a strategy for regulating the use of AI or AI-enabled tools in the drug development process. This strategy shall define the objectives, goals, and high-level principles required for appropriate regulation throughout each phase of drug development; identify areas where future rulemaking, guidance, or additional statutory authority may be necessary; and identify existing budget, resources, and personnel for such regulatory systems.

Lastly, the EO directs HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish an AI safety program that includes a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings, as well as specifications for a central tracking repository for associated incidents that cause harm to patients, caregivers, or other parties.

Next Steps

While the EO provides up to one year for many of these actions to occur, the administration and these agencies may begin operationalizing them sooner, engaging industry to build AI frameworks and seeking the input needed to roll out AI systems and processes in key areas of the healthcare sector.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.