ARTICLE
14 May 2025

The Confluence Of AI And Data Privacy: Aligning Data Privacy Regime In India For The Age Of AI

Hammurabi & Solomon

Contributor

Hammurabi & Solomon Partners, established in 2001 by Dr. Manoj Kumar, ranks among India’s top 15 law firms, offering a client-focused, solutions-driven approach across law, policy, and regulation. With over 16 leading partners and offices in key Indian cities, the firm provides comprehensive legal services, seamlessly guiding clients through the complexities of the Indian legal landscape. Known for quality and innovative problem-solving, H&S Partners is committed to client satisfaction through prompt, tailored counsel and deep sector expertise, impacting both national and international legal frameworks.

A look into India's digital transformation and the current legal landscape addressing data privacy and artificial intelligence.

Rapid advancements in technology and the growing presence of artificial intelligence have made concerns around personal data protection more critical than ever. To address these, India introduced the Digital Personal Data Protection (DPDP) Act, 2023, amid the global rise of AI applications and generative AI systems. While the DPDP Act aims to safeguard individual privacy by promoting transparency, ensuring accountability in data handling, and giving citizens greater control over their personal information, it also lays down certain regulatory criteria for AI model developers and deployers concerning the scope and processing of personal data, while preserving personal rights and the exemptions available to train AI models. By balancing the need for innovation with strong privacy protections, the DPDP Act thus marks a significant step toward regulating emerging technologies like artificial intelligence within a responsible and secure digital ecosystem.

Brief Insights into India's Digital Transformation Landscape

India's digital transformation era, driven by initiatives like Digital India, has embedded technology into everyday life, revolutionizing sectors such as payments, transport, and identity verification. Platforms such as BHIM, Paytm, FASTag, and biometric-based KYC have improved service delivery and public convenience. In parallel, the government has launched AI-focused initiatives like the National Strategy for Artificial Intelligence by NITI Aayog, the IndiaAI Mission under the Ministry of Electronics and Information Technology (MeitY), and Centres of Excellence (CoEs) to advance AI research and deployment in key sectors. Tools like SUPACE highlight the government's efforts to integrate AI into the judiciary, showcasing its commitment to leveraging AI for the public good.

However, these advancements come with significant challenges regarding data privacy, security, and ethical AI use. The enormous volume of personal data generated and stored, such as names, biometric details, and financial information, is vulnerable to breaches and misuse. This highlights the urgent need for a robust data protection framework to safeguard individual rights in the age of AI and big data.

While the European Union (EU) AI Act has created a buzz globally, India is proactively addressing emerging challenges such as the proliferation of deepfakes and the barriers hampering the steady growth of promising start-ups. Apart from issuing regular advisories urging intermediary platforms to ensure reliable output from AI models and to keep citizens informed of prospective hazards, India has adopted a distinct approach to AI regulation that focuses primarily on harnessing the potential of AI rather than strictly regulating it.

Implied Intersection of Data Privacy and AI

The DPDP Act does not explicitly mention AI, but its interconnection with AI is evident, as key definitions in the Act are open to interpretation. The exclusion of publicly available personal data enables AI models to process such data without restriction, facilitating commercial and technical AI applications. Key definitions illustrating this intersection are:

  • Section 2(b) of the Act, which defines 'Automated': Refers to digital processes that operate without human input once started, and hence impliedly brings within its ambit AI systems that engage in decision-making or predictions, subjecting them to data protection requirements such as transparency and accountability.
  • Section 2(s)(vii) of the Act, which defines 'Artificial juristic person': Introduces the concept of an 'artificial juristic person', legally recognizing non-human entities such as AI-driven companies and models, thereby enabling accountability under the Act.
  • Section 2(x) of the Act which defines 'Processing': Includes any automated operation performed on personal data such as collection, storage, use, or deletion which directly correlates with AI data training and usage.

Key Challenges at the Intersection of AI and Data Privacy

While the DPDP Act provides a foundational legal structure for safeguarding digital personal data, several ambiguities remain unresolved in the face of rising AI technologies. A few critical areas where the two subjects intersect, exposing potential risks and loopholes, are discussed below:

  1. Section 7 and Public Interest Loopholes: Section 7 allows data processing without consent for "public interest," but the vague term creates loopholes for misuse. For instance, Delhi Police used facial recognition during anti-CAA protests, citing public safety. Such surveillance risks infringing Article 21 (Right to Life and Personal Liberty) by enabling non-consensual tracking and profiling without clear legal safeguards or accountability.
  2. Automated Decision-Making and Accountability Gaps: The Act lacks clarity on AI-driven decision-making and accountability. For instance, Aadhaar-linked biometric failures in welfare schemes like PDS and MGNREGA have led to service denial, disproportionately impacting marginalized groups and raising concerns under Article 14 (Right to Equality), with no clear redressal mechanism for algorithmic discrimination.
  3. Cross-Border Data Transfer Restrictions: Section 17's restrictions on cross-border data transfers and MeitY's push for data localization hinder global AI collaboration and raise compliance costs. For instance, draft policies require companies like Amazon, Google, and Facebook to store Indian user data locally, limiting access to global AI tools and disproportionately impacting startups and developers.
  4. Consent Mechanisms and Transparency Challenges: The complexity of AI systems often makes it challenging for users to provide informed consent. Government platforms like UMANG and MyGov use AI chatbots but lack transparency regarding data usage, storage, and sharing, undermining the DPDP Act's core principle of informed consent.

Limitations of the IT Act, 2000 and the need for a Digital India Act

The Information Technology Act, 2000, was India's first major digital legislation, introduced to promote e-commerce, combat cybercrime, and legally recognize electronic communications. While it operated effectively in the early internet era, it was not designed to address the complexities of AI and modern digital ecosystems. Several sections of the Act now appear obsolete. Section 43A, which mandates compensation for failure to protect sensitive personal data, does not address AI-specific risks. Section 66 penalizes individual cybercrimes but overlooks broader systemic harms like AI-driven misinformation and algorithmic manipulation. The now-repealed Section 66A, struck down in Shreya Singhal v. Union of India for being vague and unconstitutional, leaves a regulatory gap for harmful AI-generated content, deepfakes, and synthetic media. Section 69, which permits surveillance in the name of national security, lacks specific safeguards against intrusive AI technologies like facial recognition and predictive analytics, raising serious privacy concerns under Article 21 of the Constitution. Additionally, Section 79 provides safe harbour to intermediaries but fails to consider the active role AI plays in curating and amplifying content, reducing platform accountability.

In light of existing limitations, there is a clear need for a comprehensive, forward-looking regulatory framework. The upcoming Digital India Act (DIA) is poised to address this gap by introducing robust mechanisms to govern AI, ensure platform accountability, and safeguard user rights, complementing the DPDP Act, 2023. Marking a shift toward a secure and innovation-led digital ecosystem, the DIA is expected to include risk-based classification of AI systems, enhanced duties for digital intermediaries, regulation of synthetic and AI-generated media, and regulatory sandboxes to foster innovation. By embedding transparency and accountability, the DIA seeks to balance technological progress with ethical and legal safeguards.

Way Forward

While harnessing and fostering its #AIForAll initiative, India can lead globally by balancing innovation with strong safeguards through strategic, forward-looking policy measures such as:

  1. Strengthening Definitions and Regulations for AI: The Act and its upcoming Rules should incorporate clear definitions and governance mechanisms specific to AI, especially around automated decision-making. A dedicated section on AI can help mitigate misuse and legal ambiguities.
  2. Public Interest and Accountability: The 'public interest' clause must be refined to prevent misuse in areas such as AI-enabled surveillance and profiling. Independent oversight and defined parameters are essential to uphold accountability.
  3. Building AI Transparency: As AI becomes more integrated into daily life, organizations should be required to disclose how personal data is collected, processed, and used. Public awareness campaigns can promote informed consent and user trust.
  4. Collaboration for Innovation and Global Standards: India should align with global regulatory standards by fostering international data-sharing frameworks that uphold privacy norms. Harmonizing domestic laws with evolving global standards will address jurisdictional overlaps. Establishing robust, cross-border data-sharing mechanisms and coordinated platforms will support responsible AI research, data protection, and international collaboration.
  5. Implementation of Oversight Mechanisms: An AI Ethics and Oversight Committee should work with the Data Protection Board of India to monitor compliance and address AI-related grievances.

The intersection of AI and the DPDP Act poses legal ambiguities and privacy risks, especially in sensitive sectors. Hence, a forward-looking regulatory framework is essential to address accountability, bias, and surveillance, ensuring ethical AI development aligned with constitutional values and sustainable innovation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
