The integration of artificial intelligence (AI) into healthcare is transforming the industry, offering unprecedented opportunities to improve patient outcomes, streamline processes, and reduce costs. From diagnostic tools powered by machine learning to AI-driven surgical robots, the potential applications of this technology are vast.
However, alongside these advancements come significant medico-legal risks that healthcare providers, technology developers, and legal professionals must navigate carefully.
This article explores the key legal challenges associated with the implementation of AI in healthcare and provides practical insights for mitigating these risks.
What is AI?
AI refers to the simulation of human intelligence in machines that are programmed to think, learn, and make decisions.1 AI systems can perform tasks that typically require human intelligence, such as problem-solving, understanding natural language, recognising patterns, and making predictions.2
There are different types of AI, ranging from narrow AI, which is designed for specific tasks (such as virtual assistants or recommendation systems), to general AI, which would be capable of performing any intellectual task a human can. AI is used in various fields, including healthcare, finance, law, transportation, and entertainment, to improve efficiency and decision-making.3
The role of AI in modern healthcare
Through technologies like machine learning, natural language processing and robotics, AI is being deployed across the entire healthcare spectrum, promising significant advancements in patient care.4
Some of the key benefits of AI use in healthcare include:
- Improved diagnostics: AI can analyse medical data (such as imaging scans) with high accuracy, helping to detect diseases like cancer, heart conditions, and neurological disorders earlier and more reliably.5
- Personalised treatment plans: AI can process patient data to recommend tailored treatment plans based on individual health profiles, genetics, and medical history.6
- Enhanced efficiency: Automating administrative tasks (such as scheduling, billing, and record-keeping) allows healthcare professionals to focus on patient care and reduces costs.7
- Predictive analytics: AI can predict patient outcomes, disease progression, and potential complications, enabling proactive interventions (see the illustrative sketch following this list).8
- Drug discovery and development: AI accelerates the process of identifying potential drug candidates, reducing the time and cost of bringing new treatments to market.9
- Telemedicine and virtual health: AI-powered tools enable remote consultations, monitoring, and diagnosis, improving access to healthcare for patients in rural or underserved areas.10
- Improved patient monitoring: Wearable devices and AI algorithms can track vital signs and alert healthcare providers to potential health issues in real time.11
- Enhanced surgical precision: Robotic-assisted surgeries powered by AI can improve accuracy, reduce recovery times and minimise risks.12
- Medical research: AI can analyse vast amounts of data to identify trends, correlations and insights, advancing medical research and innovation.13
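To make the predictive analytics point concrete, here is a minimal sketch in Python using scikit-learn. All data is synthetic and the features (age, prior admissions, length of stay) are hypothetical choices for illustration; this is not a clinical model, only a demonstration that such tools output a probability which a clinician must still interpret and act on.

```python
# Minimal sketch of predictive analytics: estimating readmission risk
# from synthetic patient data. Nothing here is clinically validated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n = 1000
X = np.column_stack([
    rng.integers(18, 90, n).astype(float),  # age in years (hypothetical feature)
    rng.poisson(1.5, n).astype(float),      # number of prior admissions
    rng.exponential(4.0, n),                # length of stay in days
])
# Fabricated outcome rule: older patients with more prior admissions are
# more likely to be readmitted. This encodes no real clinical knowledge.
logit = 0.03 * X[:, 0] + 0.6 * X[:, 1] + 0.05 * X[:, 2] - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The tool outputs a risk estimate, not a decision: responsibility for
# acting on (or overriding) the flagged risk stays with the clinician.
risk = model.predict_proba(X_test[:1])[0, 1]
print(f"Predicted readmission risk for one patient: {risk:.1%}")
```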
Despite these advancements, the use of AI in healthcare presents risks. The reliance on algorithms to make, or assist in, medical decisions introduces new complexities, particularly when errors occur or when the technology fails to perform as expected.
Medico-legal risks of AI in healthcare
As AI assumes increasingly autonomous roles in clinical decision-making, the traditional lines of accountability blur, creating a complex medico-legal environment that challenges existing frameworks of medical liability.
Liability for errors
One of the most pressing legal issues associated with AI in healthcare is determining liability when errors occur. For example, if an AI system misdiagnoses a condition or recommends an inappropriate treatment, who is held accountable? Is it the healthcare provider who relied on the AI, the developer of the AI system, or the healthcare institution that implemented the technology? This ambiguity can complicate legal claims and create challenges for patients seeking compensation.
The traditional principles of medical negligence, which require proof of a duty of care, breach, causation and damage, may not easily apply in cases involving AI. This is particularly true when the decision-making process of the AI is opaque, a phenomenon often referred to as the 'black box' problem.14
AI systems often operate as 'black boxes', meaning their decision-making processes are not always transparent or easily understood, even by their developers. This lack of transparency raises critical questions about liability in cases where an AI system produces an incorrect diagnosis, treatment recommendation, or other adverse outcomes.15
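To see why the 'black box' problem matters for liability, consider the following sketch (synthetic data, scikit-learn; it does not represent any particular clinical product). The model returns a prediction for an individual patient, but the closest thing to an explanation it readily offers is a set of global feature importances, which describe the model as a whole rather than the reasoning behind the specific decision a court would need to scrutinise.

```python
# Minimal sketch of the 'black box' problem: an ensemble model predicts,
# but cannot articulate patient-level reasoning. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=1)
X = rng.random((500, 4))  # four anonymous, hypothetical clinical features
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0.9).astype(int)  # fabricated outcome rule

model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

patient = X[:1]
print("Prediction for this patient:", model.predict(patient)[0])

# Global feature importances summarise the model overall; they do not
# explain why *this* patient received *this* prediction, which is the
# gap that complicates proving breach and causation.
print("Feature importances:", model.feature_importances_.round(3))
```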
Liability for an AI error in healthcare is yet to be tested in Australia. However, in the United States, the 'Watson for Oncology' (WFO) clinical decision-support system created by IBM is a prime example of the challenges associated with the use of AI in healthcare. WFO used AI algorithms to assess medical records and assist physicians in selecting cancer treatments for their patients.16 The software attracted significant criticism after reports alleged that it provided inappropriate and unsafe treatment recommendations.17 The program was ultimately discontinued in 2023.
Data privacy and security concerns
The collection, storage, and processing of sensitive health information must comply with data protection laws, such as the Privacy Act 1988 (Cth).
AI systems in healthcare rely heavily on vast amounts of patient data to function effectively.18 This reliance on data raises significant concerns regarding data privacy and security for healthcare providers, for whom data security is already a priority given the amount of personal data that flows through health systems.19
The risk of data breaches is a critical concern, as unauthorised access to patient data can lead to identity theft, financial fraud, and reputational harm. For instance, a cyberattack on an AI-powered health platform could expose the personal health information of thousands of patients, resulting in legal claims and regulatory penalties for the entity responsible.
As illustrated in Australian Information Commissioner v Australian Clinical Labs Ltd (No 2) [2025] FCA 1224, failure to take reasonable steps to protect individuals' personal information, or to provide notification of eligible data breaches, can result in substantial penalties under the Privacy Act 1988 (Cth) and Australian Privacy Principles. This case demonstrates the importance of implementing robust data security measures and timely breach notification processes, both of which are particularly challenging for complex AI systems that may introduce new cybersecurity risks.
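To illustrate one of the 'reasonable steps' the Privacy Act contemplates, the sketch below shows a single privacy-by-design measure: pseudonymising direct identifiers before records enter an AI pipeline. The key handling, function name and record fields are all hypothetical; a real deployment would also need key management, access controls, encryption at rest and in transit, and compliance review.

```python
# Minimal sketch of pseudonymisation: replacing a direct identifier with
# a keyed, non-reversible token before data is shared for AI training.
import hashlib
import hmac
import os

# In practice the key would live in a managed secret store, never in code.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "demo-key-only").encode()

def pseudonymise(patient_id: str) -> str:
    """Return a stable token derived from the identifier via keyed HMAC."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "MRN-0042", "age": 57, "diagnosis_code": "I25.1"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(safe_record)  # the raw medical record number never leaves the system
```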
Bias and discrimination
AI systems are only as unbiased as the data on which they are trained. If the training data used to develop an AI system is unrepresentative or contains inherent biases, the system may perpetuate, or even exacerbate, those biases. This can lead to discriminatory outcomes in healthcare, particularly for marginalised or underrepresented groups.20
For example, an AI algorithm trained on data predominantly from Caucasian patients may perform poorly when applied to patients from other ethnic backgrounds, resulting in misdiagnoses or suboptimal treatment recommendations. Such outcomes not only undermine the quality of care, but also expose healthcare providers to legal claims of discrimination under Australian anti-discrimination laws - including the Age Discrimination Act 2004 (Cth), Disability Discrimination Act 1992 (Cth), Racial Discrimination Act 1975 (Cth), and the Sex Discrimination Act 1984 (Cth).21
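One practical safeguard is a subgroup performance audit before deployment. The sketch below uses synthetic data and hypothetical group labels to show the basic mechanics: a model trained on data dominated by one cohort is evaluated separately for each group, surfacing the kind of performance gap that can otherwise translate into discriminatory outcomes.

```python
# Minimal sketch of a bias audit: compare model accuracy across groups.
# Data, group labels and the outcome rule are all fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=2)
n = 2000
group = rng.choice(["A", "B"], size=n, p=[0.85, 0.15])  # imbalanced cohorts
X = rng.normal(size=(n, 3))

# Fabricated outcome rule that differs by group, so a single model fitted
# mostly to group A generalises less well to group B.
shift = np.where(group == "A", 0.0, 1.0)
y = (X[:, 0] + 0.8 * shift * X[:, 1] > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# Reporting per-group accuracy exposes a disparity that a single
# aggregate metric would hide.
for g in ("A", "B"):
    mask = group == g
    print(f"Group {g}: accuracy = {accuracy_score(y[mask], preds[mask]):.2f}")
```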
Informed consent and patient trust
The use of AI in treatment decisions significantly complicates the process of obtaining informed consent. Patients need to understand not only the risks and benefits of the proposed treatment, but also the precise role AI played in shaping the treatment option, including its limitations (e.g. bias or error). Further, concerns surrounding data privacy, model accuracy and the reduction of human interaction can lead to patient mistrust, potentially affecting compliance and the critical doctor-patient relationship.22
Although patients have a right to understand the risks and benefits of treatment options which incorporate the use of AI, explaining the complexities of AI algorithms to patients in a comprehensible manner can be challenging. There is a risk that patients may not fully appreciate the implications of AI-driven decisions, potentially undermining their ability to provide truly informed consent.
Loss of clinical skills
An increasing dependency on AI diagnostic and management tools carries the risk of clinicians being deskilled. Doctors and other healthcare practitioners may become complacent, delegating cognitive responsibility to the AI. This over-reliance risks overlooking critical clinical details or applying insufficient human reasoning to the available evidence, which can lead to missed diagnoses when the AI system inevitably fails or provides incorrect guidance.23
Even technology leaders such as Elon Musk have suggested that the rapid advancement of AI could make traditional medical training 'pointless' in the future,24 a claim that underscores both the optimism and the anxiety surrounding AI's evolving role in clinical decision-making.
Mitigating medico-legal risks
To harness the transformative power of AI while safeguarding patients and mitigating medico-legal risks, action is required across regulation, policy and clinical practice.
Regulatory frameworks
Clear, specific regulations are essential for the safe development and deployment of AI in healthcare. In the UK, oversight by bodies such as the Medicines and Healthcare products Regulatory Agency and the Information Commissioner's Office creates a complex compliance environment requiring strong governance frameworks.25
In Australia, the Therapeutic Goods Administration regulates AI-based medical devices under a risk-based classification system, while the Office of the Australian Information Commissioner (OAIC) enforces privacy standards for health data. These overlapping requirements pose challenges for developers and providers, making awareness and compliance critical.
Professional bodies including the Royal College of Physicians and the Australian Medical Association stress the need for adaptable regulatory frameworks and collaboration among regulators, industry, and clinicians to ensure safe, effective AI use, and mitigate medico-legal risks.26
There is therefore a critical need for clear, specific regulations governing the development, validation, and deployment of AI in healthcare, with a focus on product safety and mandatory standards.
Whilst there is no single mode of regulation of AI in healthcare, it is important for healthcare providers to be aware of the various regulatory organisations that oversee the use of AI and to ensure that their use of AI complies with both professional and regulatory guidelines.
Data governance and privacy
AI systems depend on high-quality, lawful, and fair use of data. Organisations must implement privacy-by-design measures, strong security controls, and transparent consent practices, particularly when handling sensitive health information. In Australia, the OAIC provides detailed guidance for healthcare providers under the Privacy Act 1988 (Cth) - covering collection, use and disclosure, secondary uses (such as research), rights of access and correction, and breach response.
Embedding the OAIC's recommended steps will help healthcare providers mitigate legal exposure and maintain public trust.27
Where health information is used for research or service management, organisations should apply the National Health and Medical Research Council guidelines approved under s95A of the Privacy Act 1988 (Cth), which outline how to lawfully collect, use and disclose data in the public interest.28
Risk management in clinical practice
Healthcare providers should be trained to recognise AI limitations, hallucinations and bias, and to understand when to override AI recommendations. Further, prior to use, AI tools must be validated in representative, real-world populations and workflows.
Healthcare practitioners and organisations should also establish pathways for monitoring and reporting AI-related adverse events and near misses, and ensure rapid feedback into product updates and clinical protocols.
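One way to operationalise that monitoring, sketched below under assumed field names rather than any prescribed standard, is a structured record of each AI-assisted decision: the recommendation, the model version, the clinician's final decision and any override rationale, so that adverse events and near misses can later be reconstructed, reported and fed back into protocols.

```python
# Minimal sketch of an AI decision audit trail. Field names and storage
# format are hypothetical; real systems would use tamper-evident storage.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    patient_token: str        # pseudonymised identifier, never the raw MRN
    model_version: str        # which validated model produced the output
    ai_recommendation: str
    clinician_decision: str
    overridden: bool
    rationale: str            # the documented clinical reasoning
    timestamp: str

record = AIDecisionRecord(
    patient_token="3f9a1c0d2b7e4a51",
    model_version="triage-model-2.4.1",  # hypothetical model identifier
    ai_recommendation="discharge",
    clinician_decision="admit for observation",
    overridden=True,
    rationale="Vital signs trending down; model lacked overnight observations.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```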
Ultimately, the future of AI in healthcare demands a collaborative effort among technologists, clinicians, legal experts, and policymakers to establish robust, transparent, and adaptive frameworks that balance innovation with patient safety and clear accountability.
What this means for you
Medical practitioners
AI should be viewed as a tool that assists medical practitioners, rather than replaces them. It can enhance decision-making, improve diagnostic accuracy, and streamline administrative tasks. However, the final judgment and all patient care decisions remain the responsibility of the medical practitioner.
Medical practitioners must remain aware of their legal and professional obligations when using AI. They are ultimately accountable for the care they provide, even when AI tools are involved. This means medical practitioners should verify AI outputs before acting on them, document their clinical reasoning when using AI-assisted decisions, and stay informed about updates to AI tools to ensure they remain validated for their clinical context.
Compliance with AHPRA standards, privacy laws, and hospital governance policies is essential.
The insurance landscape
Tego Insurance is currently developing an AI insurance product which, when completed, would be the first of its kind in Australia. Eric Lowenstein, CEO of Tego Insurance, says:29
'AI exposures are evolving faster than traditional insurance products can keep up with. It won't be long before we start seeing broad AI exclusions written directly into professional indemnity, malpractice and liability policies. We see AI underpinning whole industries as it becomes embedded across operations and decision-making.'
Instead of attempting to force AI into existing policies, Lowenstein believes it will be more beneficial in the long run to create new AI-specific cover that accounts for AI decision-making processes, behaviours and inherent risks. Without such a dedicated AI policy, practitioners and healthcare providers may face significant exposure if they rely on traditional indemnity policies.
The way forward
The integration of AI into healthcare presents transformative opportunities to enhance patient care, improve diagnostic accuracy, and streamline operational efficiency. From AI-driven diagnostic tools to robotic-assisted surgeries, the potential benefits are vast and far-reaching. However, these advancements are accompanied by significant medico-legal risks, including issues of liability for errors, data privacy and security concerns, potential biases in AI algorithms, challenges in obtaining informed consent, and the risk of clinicians being deskilled.
To navigate these challenges, it is imperative for healthcare providers and their insurers, technology developers and legal professionals to collaborate in establishing robust regulatory frameworks, implementing strong data governance practices, and fostering a culture of accountability and transparency.
Healthcare practitioners must remain vigilant, ensuring that AI is used as a tool to support, rather than replace, sound clinical judgment.
Exciting and challenging times are ahead for all who are navigating this space.
Footnotes:
1 'What is artificial intelligence (AI)?' (9 August 2024). C. Stryker et al., IBM.
2 'What is AI? All you need to know about artificial intelligence' (2024). International Organization for Standardization.
3 'Types of Artificial Intelligence' (12 October 2023). IBM.
4 'The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century' (2024). S.M. Varnosfaderani, Bioengineering, 11 (4), 1-38.
5 'AI in diagnostic imaging: Revolutionising accuracy and efficiency' (2024). M. Khalifa et al., Computer Methods and Programs in Biomedicine Update, 5 (100146), 100146-100146.
6 'Artificial intelligence (AI) in personalized medicine: AI-generated personalized therapy regimens based on genetic and medical history: short communication' (2023). A. D. Parekh et al., Annals of Medicine and Surgery, 85 (11), 5831.
7 'Artificial Intelligence in healthcare'. (5 March 2025) European Commission Public Health.
8 'Unveiling the Influence of AI Predictive Analytics on Patient Outcomes: A Comprehensive Narrative Review' (2024). D. Dixon et al., Cureus, 16 (5), 1-16.
9 'The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies' (2023). A. Blanco-González et al., National Institute of Health, 16 (6), 891-891.
10 'The Impact of Artificial Intelligence on Remote Healthcare: Enhancing Patient Engagement, Connectivity, and Overcoming Challenges' (2025). U. T. Chaturvedi et al., Intelligent Pharmacy, 2949-866X.
11 'Wearable AI to enhance patient safety and clinical decision-making' (2025). A. Mahajan et al., Npj Digital Medicine, 8 (1).
12 'The rise of robotics and AI-assisted surgery in modern healthcare' (2025). J Ng, Journal of Robotic Surgery, 19 (1).
13 n 4.
14 'What is black box artificial intelligence (AI)?' (29 October 2024). M Kosinski, IBM.
15 'Evaluating accountability, transparency, and bias in AI-assisted healthcare decision-making: a qualitative study of healthcare professionals' perspectives in the UK' (2025). S. C. Nouis et al., BMC Medical Ethics, 26, 89.
16 'IBM's Watson supercomputer leading charge into early melanoma detection' (7 March 2017). Y Redrup, Australian Financial Review.
17 'Case Study 20: The $4 Billion AI Failure of IBM Watson for Oncology' (7 December 2024). Henrico Dolfing.
18 'Artificial intelligence in medical education'. (2023). Australian Journal of General Practice.
19 'Cybercriminals love healthcare - AI could be making it easier' (18 August 2025). Vanessa Seah, Avant.
20 'Ethical and Bias Considerations in Artificial intelligence/machine Learning' (2024). M Hannah et al., Modern Pathology, 38 (3), 1-13. ScienceDirect.
21 'Ethical and legal considerations in healthcare AI: innovation and policy for safe and fair use' (2025). T Pham, Royal Society Open Science, 12 (5).
22 'Artificial intelligence and the law of informed consent' (2024). I. G. Cohen et al., Edward Elgar Publishing EBooks, 167-182.
23 'AI in clinical diagnostics: Is overreliance eroding clinical expertise?' (2025). F. A. Khan, PLOS Digital Health, 4 (8), e0000959-e0000959.
24 'Elon Musk Believes Going To Medical School Is "Pointless". Here's Why' (12 January 2026). NDTV.
25 'A pro-innovation Approach to AI regulation: Government Response'. (6 February 2024). Department for Science, Innovation & Technology (GOV.UK).
26 'Regulation "must keep pace" with AI revolution: RACGP' (2023). NewsGP; 'Healthcare sector approach to AI required' (27 July 2023). Australian Medical Association.
27 'Guide to Health Privacy' (10 March 2023). OAIC.
28 'Guidelines approved under Section 95A of the Privacy Act 1988' (2014). National Health and Medical Research Council.
29 Tego Insurance.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.