ARTICLE
31 July 2025

When Machines Fail: Understanding Liability In AI-Driven Medical Errors Within Nigeria's Healthcare System

Olisa Agbakoba Legal (OAL)

Olisa Agbakoba Legal (OAL) is a leading, world-class legal solutions provider with clients in diverse sectors of the Nigerian economy. Our diversified skills ensure that we provide innovative legal solutions to our clients. At OAL, we are always devoted to our EPIC values: excellence, professionalism, innovation and commitment.

Introduction

In recent years, Artificial Intelligence (AI) has made remarkable inroads into healthcare systems around the world, including Nigeria. From diagnosing illnesses using imaging algorithms to powering chatbot consultations in primary healthcare settings, AI is gradually transforming how patients are assessed, diagnosed, and treated. These technologies offer the potential to enhance speed, accuracy, and accessibility in medical care. Yet, despite these innovations, Nigeria's healthcare system continues to face significant challenges. Medical errors contribute to approximately 27.9% of patient fatalities, with root causes including inadequate training and staffing (43.8%), poor equipment and infrastructure (21.1%), diagnostic errors (15.6%), medication errors (12.5%), and healthcare-associated infections (23.4%).1 While AI has the potential to reduce some of these errors, it also introduces new risks.

The country's AI healthcare market is projected to grow significantly from $10 million in 2022 to $130 million by 2030, reflecting a compound annual growth rate (CAGR) of approximately 46%. A report by PwC revealed that 66.7% of Nigerian healthcare professionals believe AI can enhance human intelligence, while 77% believe machine learning can improve operational efficiency2. However, as AI becomes more integrated into healthcare delivery, it raises urgent questions about responsibility and legal liability. What happens when an AI tool misdiagnoses a patient? Who should be held accountable: the doctor, the hospital, or the technology developer? These are not merely theoretical dilemmas; they represent real and pressing legal challenges that must be addressed to ensure patient safety and justice.
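Those projections can be sanity-checked with the standard CAGR formula. The sketch below is illustrative only: the endpoints ($10 million and $130 million) come from the figures quoted above, while the eight-year window is an assumption; over that full window the formula yields roughly 38%, so the quoted ~46% presumably reflects a shorter compounding horizon in the underlying report.

```python
# Sanity-check of the quoted market projection using the standard
# compound annual growth rate (CAGR) formula. The endpoints are the
# figures quoted above; the 8-year window (2022-2030) is an assumption.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

growth = cagr(10, 130, 2030 - 2022)
print(f"{growth:.1%}")  # about 37.8% over the full eight-year window
```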

This article explores the legal grey areas surrounding liability for AI-driven medical errors within Nigeria's healthcare system. It highlights existing legal frameworks, identifies gaps, and offers practical recommendations for reform, aiming to balance technological progress with the protection of patient rights.

AI's Growing Role in Nigerian Healthcare

AI is gradually being integrated into Nigeria's healthcare system, with applications in radiology, remote patient monitoring, triage chatbots, and electronic health records. Notable examples include the Lagos University Teaching Hospital's (LUTH) AI system for early breast cancer detection3 and Wellvis, a telemedicine platform improving rural healthcare access4.

AI's appeal lies in its potential to reduce human error, speed up diagnoses, and support overstretched medical staff, especially in a country with fewer than four doctors per 10,000 people.5

However, these benefits come with risks. AI systems depend on large, high-quality data sets, which are often lacking in low-resource settings.6 Faulty data can lead to misdiagnoses or harmful recommendations, raising not just medical but serious legal and ethical concerns.7

Understanding Medical Liability in Nigeria

Traditionally, when a patient suffers harm due to a medical error in Nigeria, legal responsibility falls on the human actors involved, most often the doctor, nurse, or hospital. The prevailing legal approach is grounded in the principle of negligence, which refers to a failure to exercise reasonable care that results in injury or death.

Under Nigeria's tort law system, a patient (the claimant) must establish three essential elements to succeed in a negligence claim:8

  1. That the medical professional owed them a duty of care;
  2. That there was a breach of that duty;
  3. That the breach caused the injury or harm suffered.

A significant judicial pronouncement on medical negligence can be found in the case of Ogunyinka v. Lagos University Teaching Hospital (LUTH)9. In this case, the Court of Appeal, Lagos Division, delivered a landmark judgment involving allegations of medical negligence that led to the death of a patient.

The patient's family alleged that LUTH staff delayed necessary surgical intervention, failed to provide adequate post-operative care, and committed medication errors. As a result of these failures, the patient eventually died of complications. The Court of Appeal rejected LUTH's defence of contributory negligence and overturned the decision of the lower court. It ruled in favour of the claimant and held that:

  • LUTH breached its duty of care to the deceased;
  • The hospital's negligence directly led to the patient's death;
  • There was a failure to provide timely and adequate care, including delayed surgery and insufficient post-operative management.

This case illustrates the application of tort principles to medical malpractice and confirms that Nigerian courts are willing to hold healthcare institutions accountable when standards of care are breached.


Professional Regulation and Statutory Remedies

In addition to civil liability, medical practice in Nigeria is regulated by statutory frameworks. The Medical and Dental Practitioners Act, Cap M8 LFN 2004, empowers the Medical and Dental Council of Nigeria (MDCN) to regulate the conduct of registered professionals10. Disciplinary actions can range from suspension to permanent revocation of licences11. In civil courts, damages may be awarded to patients who successfully prove malpractice or negligence. Furthermore, medical negligence with a criminal dimension can lead to charges under the Criminal Code Act12. Where such negligence results in death, the culpable professional may face charges of murder or manslaughter, contingent upon the circumstances13.

In addition to tort law, other frameworks apply:

Patients Bill of Rights (PBoR): Developed by the Federal Competition & Consumer Protection Commission (FCCPC) and Nigerian Medical Association (NMA), the PBoR affirms patients' rights to informed consent, privacy, emergency care, and respectful treatment. Launched in 2018, it draws from existing laws and professional ethics, and has been adopted by several hospitals across Nigeria to strengthen patient protection14.

Nigeria Data Protection Act 2023: Under the Nigeria Data Protection Act (NDPA) 2023, data subjects (including patients) have the right not to be subject to decisions based solely on automated processing, unless they consent or a statutory exception applies15. The Act also mandates safeguards such as human oversight and the right to contest such decisions16. However, it does not assign liability for harm caused by AI systems. Its focus is data protection and patient rights, rather than compensation for AI-related medical errors. As such, legal accountability must still be pursued under tort law or other general legal frameworks.

Overall, these frameworks were designed with human decision-makers in mind. However, as healthcare increasingly relies on AI systems, it becomes unclear how these legal tools should apply when harm arises not from a person but from a machine's recommendation.

Where AI Complicates Things

Artificial Intelligence disrupts the traditional model of assigning liability in healthcare. The core question becomes: Who is responsible when an AI tool causes harm?

Let's take a typical scenario: A doctor uses an AI diagnostic tool to assess a patient's symptoms. The AI suggests a wrong diagnosis, which the doctor follows, leading to injury. Is the doctor negligent for trusting the AI? Is the hospital liable for deploying the technology? Or should liability fall on the developer of the software?

These are not theoretical concerns. AI is often treated legally as a "product," which opens the door to product liability claims. But conventional product liability assumes a product is static: a pill, a syringe, or a piece of equipment. AI, however, is dynamic. It evolves based on the data it processes and may change its behaviour over time, which may widen the scope of product liability17.

Further, many AI systems operate as "black boxes." They generate conclusions that even their developers can't always explain in detail18. This lack of transparency makes it difficult for courts to establish fault or trace how an error occurred.

Doctors may argue that they relied on a professionally certified tool. Hospitals may insist they provided adequate supervision. Developers may argue that the AI tool merely offered recommendations, not decisions. Each party may deny fault, leaving the patient in legal limbo.

Under the doctrine of vicarious liability, institutions may be held liable for the actions of their agents, even where the "agent" is, in effect, a machine. However, proving causation remains a major hurdle: plaintiffs must trace the harm directly to a machine decision, a task made more difficult by AI's often opaque "black-box" logic19.

This creates a pressing problem: Nigeria's current legal framework does not clearly assign responsibility for harm caused by AI systems in healthcare. Nigerian law does not yet explicitly address AI liability, nor does it define AI as a legal "person" capable of bearing responsibility. This gap leaves courts and practitioners to navigate uncharted waters using analogies drawn from product liability and corporate negligence doctrines20. Courts are therefore compelled to retrofit existing legal principles, such as the duty of care, the standard of a reasonable person, and the foreseeability of harm, to situations involving autonomous or semi-autonomous systems. This uncertainty not only creates obstacles for claimants seeking redress but also complicates risk management and insurance frameworks for healthcare providers and AI developers operating in the Nigerian market.

Lessons from Other Countries

As AI use in healthcare grows, countries are taking steps to address legal uncertainty around liability, offering key lessons for Nigeria.

  • EU: The Artificial Intelligence Act classifies healthcare AI as "high-risk," while the proposed AI Liability Directive would ease claimants' burden of proof through rebuttable presumptions of fault and causation.21
  • UK: Favours a "human-in-the-loop" model, ensuring clinicians remain accountable while using AI to support, not replace, medical decisions.22
  • South Africa: Exploring updates to health and consumer laws, with proposals to embed AI tools within existing professional liability frameworks.23

These examples show that it's possible to regulate AI while protecting patients. Nigeria must decide whether to act proactively or wait until harm forces reform.

What Should Nigeria Do? (Recommendations)

To prepare for a future where artificial intelligence plays a central role in healthcare delivery, Nigeria must develop a legal and regulatory framework that assigns responsibility, promotes innovation, and protects patients. The following reforms are recommended:

  1. Define AI Liability in Healthcare
    Enact clear guidelines assigning responsibility among developers, hospitals, and clinicians for AI-related medical errors.24
  2. Certify and Regulate AI Tools
    Require pre-market testing, risk assessments, and approval of medical AI systems by an independent body, such as NAFDAC or a digital health agency.25
  3. Ensure Transparency and Disclosure
    Mandate that patients are informed when AI is involved in care and that AI systems provide explainable outputs to support legal accountability.26
  4. Create a Digital Health & AI Liability Commission (DHALC)
    Establish a multi-stakeholder body to set standards, investigate AI-related incidents, and resolve disputes.27
  5. Train Medical and Legal Professionals
    Incorporate AI ethics, data governance, and liability into medical and legal education to prepare professionals for AI-integrated care.28
  6. Introduce AI Liability Insurance
    Require hospitals and AI vendors to join a no-fault insurance scheme to compensate patients harmed by AI errors, supported by a risk-based premium model.
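The "risk-based premium model" in recommendation 6 could take many forms; one common approach is to scale a base contribution by the risk class of the AI tools a participant deploys. The sketch below is purely hypothetical: the risk classes, weights, and amounts are invented for illustration and are not drawn from this article or any existing Nigerian scheme.

```python
# Hypothetical sketch of a risk-weighted premium for a no-fault AI
# liability pool. All classes, weights, and amounts are illustrative.

RISK_WEIGHTS = {
    "administrative": 1.0,  # e.g. scheduling, electronic records
    "triage": 1.5,          # e.g. symptom-checker chatbots
    "diagnostic": 2.5,      # e.g. imaging and diagnostic algorithms
}

def annual_premium(base: float, deployed_tools: list[str]) -> float:
    """Scale the base premium by the highest-risk class deployed."""
    weight = max(RISK_WEIGHTS[tool] for tool in deployed_tools)
    return base * weight

# A hospital running records software and an imaging algorithm pays
# at the diagnostic-class rate.
premium = annual_premium(1_000_000, ["administrative", "diagnostic"])
print(f"{premium:,.0f}")  # 2,500,000
```

A real scheme would, of course, calibrate such weights actuarially against incident data, for instance data gathered by the multi-stakeholder body proposed in recommendation 4.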

Conclusion

AI holds great promise for improving healthcare in Nigeria: enhancing diagnostics, expanding access, and easing pressure on medical staff. However, when AI systems fail, the consequences can be severe, and Nigeria's current legal framework offers no clear path to accountability.

To protect patients and ensure justice, Nigeria must establish clear liability rules for AI in healthcare. Proactive regulation, investment in institutional capacity, and legal safeguards are essential to balance innovation with responsibility. Now is the time for legal and policy reform before technology outpaces oversight.

Footnotes

1. Journal of Healthcare Quality Research (2019). Medical errors in Nigerian hospitals.

2. Ugbomeh, W. O. (2024, June 15). Artificial intelligence in the Nigerian healthcare system: Revolutionizing access, efficiency, and care. ThisDayLive. https://www.thisdaylive.com/2024/06/15/artificial-intelligence-in-the-nigerian-healthcare-system-revolutionizing-access-efficiency-and-care-williams-ogochukwu-ugbomeh/

3. Health Strategy and Delivery Foundation. (2024). Trends in health care – AI spotlight: AI's revolutionary impact on health care in Nigeria. https://hsdf.org.ng/trends-in-healthcare-ai-spotlight

4. Ibid.

5. World Health Organization (WHO), Global Health Observatory Data Repository: Nigeria Health Workforce Statistics, 2022.

6. Nivedhaa, N. (2024). A comprehensive review of AI's dependence on data. International Journal of Artificial Intelligence and Data Science (IJADS), 1(1), 1–11. https://www.researchgate.net

7. Paik, K. E., Hicklen, R., Kaggwa, F., Puyat, C. V., Nakayama, L. F., Ong, B. A., Shropshire, J. N. I., & Villanueva, C. (2023). Digital determinants of health: Health data poverty amplifies existing health disparities—A scoping review. PLOS Digital Health. https://pmc.ncbi.nlm.nih.gov

8. Adejumo, O. A., & Adejumo, O. A. (2020). Legal perspectives on liability for medical negligence and malpractices in Nigeria. Pan African Medical Journal. https://pmc.ncbi.nlm.nih.gov; Delta State Hospitals Management Board & Ors v. Onome, LPELR-59333(CA).

9. Ogunyinka v. Lagos University Teaching Hospital (2017) LPELR-42351(CA)

10. See Section 1, Medical and Dental Practitioners Act, Cap M8 LFN 2004.

11. See Sections 15 and 16, Medical and Dental Practitioners Act, Cap M8 LFN 2004.

12. See Sections 343 and 344, Criminal Code Act, LFN 2004.

13. Atoyebi, O. M. (2024). An insight into medical negligence under Nigerian jurisprudence. https://omaplex.com.ng

14. For more information, see https://fccpc.gov.ng.

15. See Section 37 of the Nigeria Data Protection Act 2023.

16. Ibid.

17. Ulfbeck, V. (2024). Product liability law & AI. Cambridge University Press. https://www.cambridge.org

18. Bottomley, D., & Thaldar, D. (2023). Liability for harm caused by AI in healthcare: An overview of the core legal concepts. Frontiers in Pharmacology. https://pmc.ncbi.nlm.nih.gov.

19. Choi, W., & Lam, R. X. (2024). Assigning Moral Responsibility for AI-Derived Errors in Healthcare: Shared Responsibilization Without Responsibility Gaps. Digital Society, 3, 55.

20. Mensah, G. B., Mijwil, M. M., Abotaleb, S. M., Eid, M. M., Dutta, P. K., & Addy, A. (2023). Assessing the Role Ghana's Public Health Act, 2012 (Act 851) Can Play in Oversight of Artificial Intelligence Healthcare Systems to Prevent Medical Errors and Improve Patient Safety. Babylonian Journal of Artificial Intelligence.

21. Diega, G. N. L., & Bezerra, L. C. T. (2024). Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive. International Journal of Law and Information Technology, 32; Artificial Intelligence Liability Directive available at https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf.

22. Lee, L., Salami, R. K., Martin, H., Shantharam, L., Thomas, K., et al. (2024). "How I would like AI used for my imaging": children and young persons' perspectives. European Radiology, 34, 7751–7764.

23. Naidoo, S., Bottomley, D., Naidoo, M., Donnelly, D., & Thaldar, D. (2022). Artificial intelligence in healthcare: Proposals for policy development in South Africa. South African Journal of Bioethics and Law, 15, 11–16.

24. Mensah, G. B., Nyante, E. A., & Addy, A. (2024). Conducting a Comparative Analysis of Medical Negligence Laws in Ghana's Courts Act 1993 (Act 459) and Other African Common Law Countries Concerning Artificial Intelligence Systems. International Journal For Multidisciplinary Research. https://www.semanticscholar.org.

25. Adekunle, & Dakare, O. (2020). Sustainable manufacturing practices and performance of the Nigerian table water industry: A structural equation modeling approach. Management of Environmental Quality, 31, 1003–1022. https://www.emerald.com.

26. S., S., & Maheshwari, S. (2024). Navigating AI in Healthcare: Examining Medical Liability and the Imperative of Informed Consent in Addressing AI-driven Prescription Errors. International Journal For Multidisciplinary Research. https://www.semanticscholar.org.

27. Naidoo, T. (2024). Overview of AI regulation in healthcare: A comparative study of the EU and South Africa. South African Journal of Bioethics and Law.

28. Bottomley, D., & Thaldar, D. (2023). Liability for harm caused by AI in healthcare: An overview of the core legal concepts. Frontiers in Pharmacology, 14.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
