Transformational Artificial Intelligence: Prioritizing AI In Healthcare While Maintaining Legal Compliance

Ogletree, Deakins, Nash, Smoak & Stewart

Ogletree Deakins is a labor and employment law firm representing management in all types of employment-related legal matters. Ogletree Deakins has more than 850 attorneys located in 53 offices across the United States and in Europe, Canada, and Mexico. The firm represents a range of clients, from small businesses to Fortune 50 companies.

In February 2019, President Trump signed an executive order titled "Maintaining American Leadership in Artificial Intelligence," also known as the American AI Initiative, which aims to increase the use of artificial intelligence (AI) nationwide. The executive order identifies various federal AI-related policies, principles, objectives, and goals, including: increased federal investment in AI research and development, better education of workers relating to AI, promotion of national trust in AI systems, an emphasis on improved access to the cloud computing services and data needed to build AI systems, the creation of technical and regulatory standards relating to AI, and the promotion of AI-related cooperation with foreign powers.

According to Michael Kratsios, deputy assistant to the president for technology policy, the executive order and the policies underlying it are designed to "prepar[e] America's workforce for [the] jobs of today and tomorrow."

AI Use Cases in Healthcare

As the executive order underscores, AI affects nearly every sector of the American economy, and healthcare in particular. The healthcare industry is using AI to improve quality of care and to drive down costs. AI's growth in this sector is underscored by a CB Insights study showing that since 2013, healthcare AI startups have raised approximately $4.3 billion in funding for AI development, research, and production.

Innovative AI developments in healthcare include the following:

  • Diagnostic research and development. The ability of AI to identify disease-related risks is quickly developing. For example, one technology company has developed an artificial neural network (a computing system inspired by the biological neural network that involves various machine learning algorithms working together to process complicated data inputs) that uses retinal images to assist in the identification of cardiovascular risk factors. Similarly, Stanford University researchers have developed an algorithm to assist in the identification of skin cancer using neural networks.
  • Do-it-yourself diagnostics. Smartphones, wearables, and other connected personal devices will continue to become resources for at-home diagnostics, sometimes eliminating the need to go to a doctor's office. For example, technology companies have developed apps that use image recognition algorithms to identify skin cancer risks and to diagnose urinary tract infections.
  • AI and medical records. While many large health systems already use electronic medical records, the medical records ecosystem continues to evolve. Various companies have developed and now offer programs that analyze unstructured patient medical records by using AI tools like machine learning (a type of AI that involves algorithms that can learn from data without relying on rules-based programming) and natural language processing (a type of AI in which computers can understand and interpret human language) to deliver meaningful and searchable data, such as diagnoses, treatments, dosages, symptoms, etc.

Key Questions to Ask to Evaluate Legal Compliance

AI developments are likely to accelerate over the next decade. As AI expands into modern workplaces—healthcare and otherwise—employers may want to consider the following questions to ensure legal and regulatory compliance from a labor and employment perspective:

  1. Through the technology, is data being collected, stored, or transmitted? As the healthcare examples discussed above highlight, many AI systems collect, store, and/or transmit enormous amounts of data—often sensitive data. Various international, federal, and state rules and common law govern the collection, storage, and movement of data, as well as privacy rights. This area of the law is evolving, so employers may want to carefully review their obligations and stay up to date.
  2. Is the technology changing employees' terms and conditions of employment? AI is changing employees' working conditions, from minor workflow alterations to more significant changes like the displacement of employees through layoffs or reductions in force. In a unionized workforce, many changes to the terms and conditions of employment are subject to the collective bargaining process. Moreover, regardless of whether a union is in place, changes to employees' working conditions may implicate other state and federal laws like the Worker Adjustment and Retraining Notification Act of 1988, which mandates notification obligations before certain types of workplace employee reductions, and relevant discrimination statutes, such as the Age Discrimination in Employment Act of 1967.
  3. Is the technology changing the physical working environment? Under the Occupational Safety and Health Act, employers have a legal duty to maintain a safe workplace. The Occupational Safety and Health Administration has developed specific standards for employers utilizing robotics to ensure that the technology is safe for employees. Depending on the nature and function of the technology at issue, various additional federal and state workplace safety laws may also be implicated. Employers may want to be mindful of these rules and ensure compliance with them.
  4. Is the technology affecting employment-related decision-making? Employers are increasingly using AI to analyze job applicants and make day-to-day employment-related decisions. For example, some employers are using AI-powered software programs to auto-screen resumes as a traditional recruiter would, and others are using AI recruiting assistants to communicate with applicants through messaging apps. The information used to structure an AI algorithm could be unintentionally biased, which could potentially lead to discrimination claims by employees and/or applicants. If employers are using AI either directly or indirectly to make employment-related decisions, they may want to evaluate employment discrimination risks and mitigate against them, if possible, by, for example, understanding the data used to build out and/or train the AI at issue and regularly auditing decisions made through the use of AI.

Because of the pace of AI development and the prioritization of its growth, employers may want to continue harnessing the opportunities AI presents while staying mindful of legal and regulatory compliance issues.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
