Amidst the ever-evolving landscape of healthcare, artificial intelligence (AI) is setting a new precedent for medical professionals. As AI technologies attempt to transform medical care, diagnosis and treatment, they also raise significant legal questions and challenges. This article explores how AI is not just a tool for clinicians but a focal point for legal scrutiny, compliance, and ethical considerations in the medical field.
AI developments
AI is making significant strides in the medical field, particularly in diagnosis and treatment:
- Skin cancer diagnosis: AI systems have been developed that rival dermatologists in accuracy when diagnosing skin cancer, as reported in "Annals of Oncology".
- Diabetic retinopathy diagnosis: DeepMind, a Google subsidiary, has created an AI that diagnoses diabetic retinopathy and macular oedema from eye scans. This technology has been shown to match the accuracy of human experts, potentially allowing for earlier detection and treatment of these conditions.
- Stroke treatment: AI technologies are also being used to improve the treatment of stroke patients. An AI system developed by Viz.ai uses computer vision to analyse brain scans and detect signs of a stroke, speeding up the diagnosis process and potentially improving patient outcomes.
- Ambient Voice Technology ('AVT'): being explored and used within the NHS to improve patient care and operational efficiency. AVT systems are designed to capture and interpret human speech, converting it into actionable data or documented text (a brief illustrative sketch of the core transcription step follows below).
These advancements highlight AI's potential to improve medical care by enhancing the speed and accuracy of diagnosis and treatment.
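By way of illustration only, the sketch below shows the kind of speech-to-text step that sits at the heart of AVT tools, here using the open-source Whisper model in Python. The audio file name and the note-structuring placeholder are assumptions made for the example; real clinical AVT products add clinical-safety, data-protection and system-integration layers on top of this basic step.

```python
# Illustrative sketch only: the bare speech-to-text core of an AVT-style tool.
# Uses the open-source Whisper model (pip install openai-whisper).
import whisper


def transcribe_consultation(audio_path: str) -> dict:
    """Convert a recorded consultation into a draft transcript plus metadata."""
    model = whisper.load_model("base")      # small general-purpose model
    result = model.transcribe(audio_path)   # returns text, segments and language
    return {
        "raw_transcript": result["text"].strip(),
        "language": result.get("language"),
        # Hypothetical placeholder: turning the transcript into a structured
        # clinical note would be a separate, clinically validated step.
        "structured_note": None,
    }


if __name__ == "__main__":
    # "consultation.wav" is a hypothetical local recording used for illustration.
    print(transcribe_consultation("consultation.wav")["raw_transcript"])
```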
But what is the public/private position in relation to AI usage in the healthcare sphere?
What is the NHS position on AI?
NHS England has issued a warning to trusts and GPs about the risks associated with adopting non-compliant AI technologies, specifically ambient voice technology (AVT) for audio transcription. A letter from the national chief clinical information officer, Alec Price-Forbes, highlighted concerns about clinical safety, data protection breaches, and financial exposure. NHS bodies have been instructed to halt engagements with non-compliant AVT suppliers.
The government plans to accelerate the deployment of these systems as part of its 10-Year Health Plan, aiming to save clinicians time on administrative tasks. However, the rapid growth and lack of clear regulation in the AVT market have prompted NHS England to develop a national delivery proposal. This proposal will support the rollout of standardised AVT solutions across all care settings in a safe and compliant manner.
NHS England has set assurance standards for AVT solutions, which include data protection, clinical safety, and system integration requirements. These standards are crucial for ensuring that the technologies are safe and effective.
Despite the potential benefits of AVT in enhancing efficiency and reducing administrative burdens, NHS England has stressed the importance of compliance with regulatory standards to avoid risks. The letter also mentioned that further communications would soon provide more details on the national delivery proposal for AVT solutions.
What is the private health position on AI?
Whilst the private sphere is spearheading the use of AI in medicine, what does this mean for the insurance policies attempting to regulate its use?
'Silent AI' is a hot topic amongst policy underwriters because many policies were drafted before the recent surge in AI adoption. Whether AI usage is covered by such policies is often unclear: the wording may or may not extend to it, leaving cover effectively a matter of chance and raising the question of whether any specific thought was given to AI at the drafting stage. We are now starting to see insurers refer specifically to AI, either by way of exclusion or affirmative cover, but most policies remain silent.
It is recommended that insurers make a firm decision as to their approach to AI and address it specifically and expressly in policy wording.
Medical negligence claims
In the context of medical negligence claims in England, the integration of AI in healthcare does introduce several potential risks and complexities:
Liability and accountability
Determining liability in cases where AI systems are involved can be challenging. If an AI system contributes to a medical error, it may be difficult to ascertain whether the fault lies with the healthcare provider, the technology developer, or both.
Standard of care
AI might alter established standards of care. As AI technologies are adopted, what is considered an acceptable standard of care may shift, potentially creating new benchmarks for negligence claims. With reference to the well-known Bolam test (whether a doctor acted in accordance with a responsible body of professional opinion), will such advancements see the use of AI itself become part of the required standard of care?
Informed consent
There are concerns about how well patients are informed about the use of AI in their care. Patients must be made aware of AI involvement in their diagnosis or treatment and the potential risks associated with it. Failure to do so could lead to consent-related issues.
Data protection and privacy
AI systems often require large datasets, which can include sensitive patient information. There is a risk of data breaches or misuse, which could lead to claims if patient data is mishandled or exposed.
Reliability and errors
Like any technology, AI systems can produce errors. Depending on the nature of the error, this could lead to incorrect diagnoses, inappropriate treatment recommendations, or other issues that harm patients.
So, what does this mean?
Healthcare and legal professionals dealing with medical negligence need to be aware of these issues as they navigate claims involving AI. As AI continues to evolve, so too will the legal frameworks, insurance policies and considerations surrounding its use in healthcare. Whilst the use of AI technology should be embraced, caution is strongly advised as the legal profession prepares to navigate the AI field.
In summary, while AI offers significant opportunities to improve healthcare efficiency, careful steps must be taken to ensure that these technologies are adopted in a controlled and compliant manner to safeguard clinical safety and data protection.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.