The rise of digital health tools (health-tech platforms, wearables, robotics) and integration of Artificial Intelligence (AI) is transforming healthcare, unlocking new possibilities for diagnosis, treatment, and care. The umbrella term 'AI' refers to a collection of different types of technologies.
In medical diagnosis and preventive care specifically, machine learning (ML) algorithms trained on vast health datasets identify patterns that help organize data and predict potential health risks. Deep learning, a powerful subset of ML, tackles more intricate tasks such as early disease detection from medical images.
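To illustrate the kind of pattern recognition involved, the sketch below trains a simple risk-prediction classifier on synthetic tabular health data. The feature names, the synthetic data, and the scikit-learn model choice are illustrative assumptions only, not a reference to any particular clinical system.

```python
# Minimal sketch of ML-based health-risk prediction (illustrative only).
# Feature names, synthetic data, and model choice are assumptions for
# demonstration -- not a clinically validated pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features: age, BMI, systolic blood pressure, fasting glucose.
X = np.column_stack([
    rng.normal(55, 12, n),     # age
    rng.normal(27, 4, n),      # BMI
    rng.normal(130, 15, n),    # systolic blood pressure
    rng.normal(100, 20, n),    # fasting glucose
])
# Synthetic "high-risk" label loosely driven by the features plus noise.
risk_score = 0.03 * X[:, 0] + 0.05 * X[:, 1] + 0.02 * X[:, 2] + 0.02 * X[:, 3]
y = (risk_score + rng.normal(0, 1, n) > risk_score.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Report discrimination on held-out data; real systems need far more
# rigorous, clinically supervised validation than a single AUC figure.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out ROC AUC: {auc:.2f}")
```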
Natural language processing (NLP) enables machines to interpret human language, facilitating report translation, text analysis, and healthcare chatbots. Additionally, robotic process automation (RPA) applies AI to workflow automation, exemplified by robotic surgical tools that enhance precision and deliver real-time data to doctors.
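A toy example of the text-analysis step is sketched below: short clinical-note snippets are classified as urgent or routine. The notes, labels, and the bag-of-words model are assumptions for illustration; production healthcare NLP relies on far larger, clinically validated language models.

```python
# Toy sketch of NLP-style text analysis on clinical-note snippets
# (illustrative only -- the notes, labels, and model are assumptions).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports severe chest pain and shortness of breath",
    "routine follow-up, blood pressure stable, no new complaints",
    "sudden onset weakness on left side, possible stroke",
    "annual check-up, labs within normal range",
]
labels = ["urgent", "routine", "urgent", "routine"]

# Bag-of-words features feeding a linear classifier stand in here for the
# much larger language models used in real healthcare NLP systems.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(notes, labels)

# Classify a new, unseen note; with this tiny corpus the overlapping
# terms ("chest pain") are expected to push it towards "urgent".
print(clf.predict(["patient complains of crushing chest pain"]))
```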
Together, these AI advancements give healthcare a revolutionary suite of tools that aid not only in diagnosis, illness forecasting, and preventive care, but also in drug discovery, patient interaction support, and much more.
AI in Diagnosis
Diagnostic AI systems aid healthcare professionals in making clinical decisions, typically by drawing on patient data, medical records, and other clinical information to identify medical conditions and to help select appropriate treatment plans.
Recent successful use cases of AI in diagnosis include systems that analyse medical images from computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and other modalities to detect underlying conditions with greater efficiency and accuracy.
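To make the image-analysis step concrete, here is a deliberately simplified sketch: tiny synthetic grayscale "scans", some containing a bright lesion-like patch, are flattened and fed to a standard classifier. Real diagnostic systems use deep convolutional networks trained on large curated image banks; the array sizes, lesion simulation, and model here are assumptions for illustration only.

```python
# Deliberately simplified stand-in for deep-learning image analysis:
# synthetic 32x32 grayscale "scans", some with a bright lesion-like blob,
# classified from flattened pixels. Real systems use deep CNNs trained on
# curated clinical image banks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_scan(with_lesion: bool) -> np.ndarray:
    img = rng.normal(0.3, 0.05, (32, 32))          # background tissue noise
    if with_lesion:
        r, c = rng.integers(4, 24, size=2)
        img[r:r + 5, c:c + 5] += 0.5               # bright lesion-like patch
    return img.clip(0, 1)

X = np.array([make_scan(i % 2 == 0).ravel() for i in range(400)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(400)])  # 1 = lesion present

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Held-out accuracy on the toy task: {clf.score(X_te, y_te):.2f}")
```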
Notably, the Comprehensive Archive of Imaging ("CHAVI"), India's first de-identified cancer image bank, was recently launched by Tata Medical Centre and the Indian Institute of Technology, Kharagpur, which aims to enhance AI radiomic research for cancer treatment.
Challenges & Risks
While AI in diagnosis clearly offers tremendous advantages, like any novel technological advancement it is not free from risk. Its potential drawbacks require careful attention before wide deployment, especially when considering key principles of health-tech AI regulation.
While there are many plausible reasons why AI may fail or not work as intended, the key concerns that have been widely discussed include:
- Errors and flaws in AI systems leading to incorrect diagnoses and recommendations by doctors; for instance, an AI image-analysis tool may misread a surgical scar as cancer.
- Opacity and lack of explainability (in the form medical professionals need) in AI processes, potentially making it difficult for them to evaluate and rely on AI-generated advice.
- AI systems malfunctioning due to unexpected changes in data or becoming outdated as healthcare practices evolve.
- Data used to train AI being incomplete, biased, or simply not reflective of real-world scenarios, leading to inaccurate diagnoses or biased treatment recommendations (a simple audit of this kind of skew is sketched below).
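One way such data problems surface in practice is as a performance gap between patient subgroups. The sketch below, using synthetic data and an arbitrary subgroup split that are purely illustrative, shows the kind of check a deployment team might run before relying on a model.

```python
# Illustrative subgroup-performance audit on synthetic data: a model that
# looks accurate overall can still underperform for an under-represented
# group. Group definitions and data here are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Group A dominates the training data (900 vs 100 records); the same
# feature-to-outcome relationship is noisier for group B.
def make_group(n, noise):
    X = rng.normal(0, 1, (n, 3))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, noise, n) > 0).astype(int)
    return X, y

XA, yA = make_group(900, 0.5)
XB, yB = make_group(100, 2.0)

model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Evaluated on the same synthetic data for brevity; a real audit would use
# held-out data for each subgroup.
for name, X, y in [("group A", XA, yA), ("group B", XB, yB)]:
    print(f"{name}: accuracy {accuracy_score(y, model.predict(X)):.2f}")
```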
Dealing with liability for harm under regulatory frameworks
Most efforts to develop appropriate regulatory frameworks around the design and deployment of AI systems currently take a preventive approach to risk mitigation.
Existing or proposed regulatory frameworks, including the EU Artificial Intelligence Act and the proposed Canadian Artificial Intelligence and Data Act, rely on a combination of risk-based prohibitions, transparency and explainability obligations, reporting or auditing requirements, human oversight, risk-management frameworks, and data protection compliance, along with robustness, accuracy, and security criteria.
However, as AI applications rapidly evolve, even nimble regulators may find it difficult to keep pace with new areas of specific regulation. Regulating AI systems such as those used in medical diagnosis will therefore soon require specific principles for determining and apportioning accountability and liability to be set out, particularly principles suited to that context.
Determining responsibility and liability for harm caused by AI systems is complicated by the 'many hands' problem: when many actors contribute to an outcome, it becomes difficult to assign responsibility and to guard against the unintended consequences of collective action.
In a sensitive sector like healthcare, addressing this is all the more critical because the potential adverse consequences are serious. The AI ecosystem in healthcare spans many stakeholders, including those who create AI for healthcare, healthcare organizations and practitioners, beneficiaries of the system, and data subjects and data collectors. Understanding the broad nature of errors and harm that could arise from these various sources is therefore important for assigning responsibility fairly, especially in diagnosis, where the beneficiaries are at maximum risk.
AI developers and providers
As healthcare professionals generally lack the expertise to assess complex AI systems, it is arguable that the AI developer should largely carry the responsibility for harm directly attributable to the technology, such as errors and glitches in the technology itself, or instances where it does not operate as intended or promised.
Instances of liability for developers and providers may also include failure to appropriately and accurately disclose the operating parameters and system limitations.
Healthcare providers
Liability for harm may be attributed to healthcare providers if they fail to exercise their own discretion over AI recommendations, or if they overlook the system's built-in boundaries, limitations, and operating conditions as disclosed by its developers.
Likewise, even where the technology contains errors, healthcare providers may contribute to the resulting harm, for example through over-reliance on the technology, failure to account for patent errors or known malfunctions, or gross negligence.
Since healthcare providers owe a higher duty of care owing to their specialized role, healthcare regulators may be inclined to apply similar principles of liability to their use of AI tools.
Liability for beneficiaries?
There are certain AI-based healthcare devices or systems that beneficiaries can access directly, without their healthcare provider necessarily being present at the time of the interaction.
For instance, a few certified AI-based applications and devices can already detect and diagnose conditions such as skin cancer and cardiovascular disease, and can also provide health recommendations.
While a case may be made for holding beneficiaries accountable for harm caused by their misuse or improper use of the technology, such instances should remain extremely rare and narrowly defined.
Prognosis for dealing with liability
To deal with liability arising out of AI technology in healthcare, the accountability framework would need to be adapted from, and built on, existing fundamental principles of contract law, product liability in healthcare, pharmaceuticals and medical devices, and medical negligence, with appropriate adjustments to address the peculiar challenges posed by AI-based systems and products.
Given the high risk involved, regulatory frameworks should clearly provide for human oversight in AI-assisted healthcare, aligning with "human-in-the-loop" design principles, until systems demonstrate sufficient reliability and the industry achieves a critical level of accuracy.
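As a rough illustration of what "human-in-the-loop" can mean at the software level, the sketch below gates every AI recommendation behind an explicit clinician decision and records who approved it. The data structures, field names, and confidence threshold are assumptions for illustration, not a prescribed design.

```python
# Minimal human-in-the-loop sketch: no AI recommendation becomes an order
# until a named clinician reviews it. Field names and the confidence
# threshold are illustrative assumptions, not a prescribed design.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    patient_id: str
    finding: str
    confidence: float          # model-reported confidence, 0..1

@dataclass
class ClinicalOrder:
    patient_id: str
    finding: str
    approved_by: str
    approved_at: str

def review(rec: Recommendation, clinician_id: str, accept: bool) -> Optional[ClinicalOrder]:
    """Create an order only if a clinician explicitly accepts the recommendation."""
    if rec.confidence < 0.5:
        # Low-confidence outputs are flagged for extra scrutiny rather than hidden.
        print(f"Low confidence ({rec.confidence:.2f}) -- review source images/data.")
    if not accept:
        return None
    return ClinicalOrder(rec.patient_id, rec.finding,
                         approved_by=clinician_id,
                         approved_at=datetime.now(timezone.utc).isoformat())

rec = Recommendation("P-001", "suspected lesion, left lung", confidence=0.42)
order = review(rec, clinician_id="DR-114", accept=False)
print("Order created" if order else "No order: clinician declined the AI suggestion")
```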
The constituents of the AI healthcare ecosystem, such as healthcare providers, patients, hospitals, and the creators or developers of AI tools and systems, would need to focus not only on the regulatory and liability framework but also on the contractual obligations amongst them.
Accordingly, the contractual understanding between these constituents would need to be carefully defined, especially with regard to the extent of their liability and any exclusions, so that unintended consequences leading to harm and statutory liability can be suitably addressed.
Towards a Safe AI System in Healthcare
AI is revolutionizing healthcare delivery, offering a future of increased precision, personalized care, and improved efficacy across diverse applications – from diagnostics to surgical assistance. However, responsible adoption is crucial.
Some of the inherent challenges and ethical risks can be addressed by prioritizing transparency in AI decision-making, ensuring its explainability to medical professionals, and promoting inclusivity to avoid bias. Yet there is a need for more specific guidelines on adoption and usage in the healthcare context, in addition to the focus on general regulation of AI.
This can be achieved through sector-specific regulations, or guidelines issued by appropriate associations of medical professionals and institutions. Through open dialogue among all stakeholders, a co-regulatory approach can harness AI's full potential to not only optimize healthcare outcomes but also uphold the core values of patient safety and care.
Originally published by ETGovernment, Jun 27, 2024.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.