ARTICLE
9 September 2025

AI On Trial: Rethinking Liability In India's Current Legal Framework

Legacy Law Offices

Contributor

Legacy Law Offices LLP is a multi-disciplinary law firm with a diversified portfolio of professional legal services. The firm has offices in New Delhi, Chandigarh, Solan and Kurukshetra, with an associate office in Mumbai as well as a representative office in the Kingdom of Saudi Arabia. Lawyers of the firm provide legal services across India and have worked on various developmental projects across countries in SAR, CAREC, MENA, and Africa. Legacy Law Offices LLP is also empanelled with the PPP authority of Bangladesh. The firm is acknowledged as a "Leading Law Firm" by various leading global legal directories, including Legal500 and IFLR1000.

Introduction

Artificial Intelligence is no longer a futuristic concept in India. It is rapidly transforming healthcare, financial services, education, and governance, and has become embedded in the everyday lives of people. However, such mainstream adoption also brings with it a pressing legal dilemma: who should be held accountable when AI systems malfunction or cause irreparable harm? Unlike traditional technologies, AI systems are characterised by autonomous decision-making, which often makes it difficult to pinpoint liability when damage occurs.

While India has existing legal frameworks such as the Digital Personal Data Protection (DPDP) Act, 2023, the Information Technology Act, 2000, and the Consumer Protection Act, 2019, these instruments only provide partial safeguards and are not designed to address the unique challenges posed by AI. Recognising these gaps, the Government of India has recently launched the IndiaAI Mission and announced the creation of an AI Safety Institute to develop standards for safe and ethical AI. This article explores the current liability framework in India, the gaps that exist, and possible pathways to create a forward-looking AI liability regime.

The Current Legal Landscape in India

India's existing regulatory framework offers only fragmented coverage for issues that may arise in connection with AI. The DPDP Act, 2023 provides safeguards for privacy and against the misuse of personal data, but it does not directly address the liability of AI systems that act independently. Similarly, the Information Technology Act, 2000 and the rules framed under it, including those concerning the Indian Computer Emergency Response Team (CERT-In), are designed to address cybersecurity breaches, but they are not equipped to deal with AI-driven harms where decisions are taken autonomously without direct human intervention. The Consumer Protection Act, 2019 permits claims against defective products or deficient services and could, in principle, be extended to cover AI tools, but it does not answer the critical question of whether responsibility lies with the developer, the deploying company, or the end-user. In certain sectors, such as healthcare, the Medical Device Rules, 2017 provide regulatory oversight over AI-enabled diagnostic tools, but these too fail to assign clear liability in the event of harm caused by such devices.

In short, India is still relying on frameworks created for human-controlled or static systems, and these legal instruments do not address the unique challenges posed by autonomous and opaque AI systems.

The Liability Puzzle: Who's Responsible?

The most difficult question in AI governance is that of liability. Consider a scenario where an AI-enabled diagnostic tool produces an incorrect medical result, harming a patient. Should liability rest with the software developer who created the algorithm, the hospital that deployed it, or the clinician who relied on it? The problem is further complicated by the "black box" nature of AI systems, where even the developers themselves may not fully understand how the AI reached a particular conclusion.

Globally, different jurisdictions have begun to grapple with these questions in different ways. The European Union has adopted a precautionary approach: its AI Act (2024) imposes stringent, risk-based obligations on providers of high-risk AI systems, while parallel EU liability reforms move towards holding providers of such systems accountable for harm irrespective of fault. The United States, by contrast, has adopted a more fragmented, sector-based approach, where liability rules differ across industries such as healthcare, finance, and autonomous vehicles. In India, however, the courts have not yet had the opportunity to deal with AI-specific liability cases. When faced with similar issues, Indian courts have so far relied on analogies drawn from existing principles of product liability, negligence under the IT Act, and constitutional jurisprudence such as the Puttaswamy judgment on privacy.

This raises the critical policy choice for India: should it move towards a strict liability regime that prioritises consumer safety and accountability, or should it continue with a fault-based system that protects innovation and emerging startups but risks leaving victims without adequate remedies?

IndiaAI Mission and the AI Safety Institute

The IndiaAI Mission, launched in March 2024, represents the government's effort to position India as a global hub for AI innovation. The Mission aims to democratise access to computing infrastructure, improve the quality of data available for AI training, and promote responsible and ethical AI practices. As part of this initiative, the government also announced the establishment of an AI Safety Institute (AISI) in January 2025. The Institute is tasked with developing risk assessment frameworks, creating standards for safe AI deployment, and advising policymakers and regulators on AI-related issues.

The AISI is expected to play a critical role in developing mechanisms such as risk classification of AI applications, certification of high-risk systems such as those used in healthcare and finance, and providing expert input to the judiciary when disputes arise. It could also become a key advisory body to both government and industry in promoting fairness, transparency, and accountability in AI. However, without a binding legislative framework, there is a danger that the Institute may remain a policy think tank with limited enforceability, rather than a body with real authority to shape liability rules.

Legal Challenges Ahead

Moving forward, India faces several pressing legal challenges in shaping its AI liability regime. The first is accountability: should developers of AI systems be treated like manufacturers of physical goods, and therefore be held liable whenever their product malfunctions? The second is bias and discrimination, as AI algorithms often reflect or even amplify societal biases present in their training data. Indian courts may need to test such algorithmic unfairness against constitutional guarantees such as Article 14 (right to equality) and Article 21 (right to life, dignity, and privacy). A third challenge lies in cross-border AI models, as many widely used AI systems are imported into India through software-as-a-service arrangements, raising complex jurisdictional questions about liability.

Contractual arrangements also pose difficulties. Many companies seek to limit their liability by inserting indemnity or disclaimer clauses in their contracts with users. Whether such clauses would be enforceable in India, especially in consumer protection contexts, remains unclear. Finally, India must ensure that its emerging AI framework is harmonised with global regimes such as the EU AI Act. Without such harmonisation, Indian businesses may face conflicting compliance obligations that could impede their participation in global AI markets.

The Way Forward and Recommendations

To bridge these gaps, India should consider enacting dedicated AI legislation, perhaps in the form of an AI Liability Act, that lays down clear and predictable rules for assigning responsibility, particularly in the case of high-risk AI systems. Alongside this, the creation of regulatory sandboxes could allow startups and innovators to test their AI systems in controlled environments, with limited liability, thereby balancing innovation with accountability. Further, sector-specific guidelines in areas such as healthcare, education, and financial services could ensure that the liability framework is sensitive to the unique risks posed by AI in each sector.

It would also be advisable to encourage the use of alternative dispute resolution mechanisms, such as mediation and arbitration, to resolve AI-related disputes in a cost-effective and time-bound manner, rather than relying solely on traditional court litigation. Ultimately, the legal framework must strike a careful balance between protecting individuals from harm and ensuring that innovation in AI is not stifled by excessive regulation.

Conclusion

India is at a crucial juncture in its AI journey. While existing legal frameworks such as the DPDP Act, IT Act, and Consumer Protection Act offer partial protection, they are not designed to deal with the unique challenges posed by autonomous AI systems. The IndiaAI Mission and the AI Safety Institute represent important first steps, but unless accompanied by binding legislation, they cannot fill the gaps in liability rules. This moment presents India with a historic opportunity: to design an AI liability regime that not only protects citizens and consumers but also fosters innovation. If India acts proactively, it can emerge not just as a rule-taker but as a rule-maker in the global conversation on responsible and resilient AI governance.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
