There have been several instances of misinformation, impersonation, and online harm stemming from Artificial Intelligence (AI)-generated content such as deepfakes, including high-profile incidents involving Ratan Tata, Amitabh Bachchan, and Rashmika Mandanna. To strengthen the country's regulatory framework for addressing such content, the Ministry of Electronics and Information Technology (MeitY) recently took two significant steps:
- Amendment to Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Rules) to update the process for intermediaries to remove or disable access to unlawful content upon receiving valid takedown directions.
- Proposed amendment to the due diligence framework under the Rules aimed at regulating AI-generated content.
Recently, the issue of deepfakes and AI-generated misinformation came into the spotlight when Sadhguru and the Isha Foundation sought legal intervention against AI-generated deepfake videos and advertisements falsely showing them endorsing certain products. Subsequently, in the matter of Sadhguru Jagadish Vasudev v. Igor Isakov,1 the Delhi High Court directed several intermediaries, including Google LLC, to proactively remove such manipulated content and adopt technological measures to prevent its recurrence. The order underscored the urgent need for stronger legal mechanisms to deal with deepfake-based misinformation, influencing MeitY's decision to tighten intermediary obligations.
The notified amendment to Rule 3(1)(d)
Rule 3(1)(d) of the Rules governs the process by which intermediaries such as social-media platforms, search engines, and messaging services are required to remove or disable unlawful content. The latest amendment by MeitY brings in several structural reforms:
- The Rule earlier mandated intermediaries to act within 36 hours of obtaining 'actual knowledge' through a Court order or a government 'notification', without specifying the rank of the officer issuing such directions or providing any mechanism for review or accountability. Following the amendment, takedown directions must be issued by a senior government officer not below the rank of Joint Secretary or, in the case of law-enforcement agencies, by an officer not below the rank of Deputy Inspector General of Police.
- Each direction must be a reasoned, written order specifying the statutory basis, the nature of the unlawful act, and the precise URL or digital location of the offending content.
- To ensure oversight, the amendment further provides that all such orders will be subject to a monthly review by a Secretary-level officer to examine whether they remain necessary and proportionate.
Although intermediaries continue to be bound by the 36-hour compliance window, the process now incorporates a higher degree of transparency and accountability. At the same time, the amendment modifies the earlier 'Good Samaritan' protection, which allowed intermediaries to voluntarily remove content without losing their safe-harbour immunity under Section 79 of the Information Technology Act, 2000 (Act). The omission or narrowing of this protection could have a chilling effect on proactive moderation, since platforms may fear legal exposure even when acting in good faith.
The proposed amendment to regulate AI-generated content
Alongside the notified amendment, MeitY has proposed to add certain provisions to the Rules that specifically address the dissemination of AI-generated misinformation (Draft Rules). The Draft Rules introduce the concept of 'Synthetically Generated Information (SGI)' under Rule 2(1), defining it as any information artificially or algorithmically created or modified using a computer resource in a manner that makes it appear authentic or true. Further, references to 'information' across key provisions of the Rules, including those relating to unlawful acts and due diligence obligations, would encompass SGI unless the context suggests otherwise.
The Draft Rules propose a series of additional obligations for intermediaries:
- Any intermediary that provides tools or computer resources enabling users to create or alter information must ensure that such content is permanently marked or labelled to indicate that it has been synthetically generated. For visual content, this label must cover at least 10% of the surface area, and for audio, it must appear during the initial 10% of the clip's duration. Intermediaries are prohibited from permitting the removal or suppression of this identifier (a sketch illustrating the visual-labelling threshold follows this list).
- Significant social media intermediaries (i.e., social media intermediaries with 50 lakh (5 million) or more registered users) are required to mandate that users declare whether the content they upload is synthetically generated, and must deploy 'reasonable and appropriate technical measures' to verify the accuracy of such declarations.
- If a platform knowingly allows publication of AI-generated content in violation of the Draft Rules, or fails to act after becoming aware of its synthetic nature, it would be deemed to have failed its due diligence obligations. However, as a safeguard, intermediaries will not lose their statutory immunity if they, in good faith, remove or disable access to synthetic content based on user complaints or their own detection mechanisms.
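To make the 10% visual-labelling threshold concrete, the following is a minimal Python sketch, using the Pillow imaging library, of how a generation tool might stamp a permanent label onto an output image. The banner placement, label text, and choice of library are our own illustrative assumptions; the Draft Rules prescribe only the minimum coverage, not any particular implementation.

```python
# Illustrative sketch only: the Draft Rules require a label covering at least
# 10% of a synthetic image's surface area, but do not prescribe its form.
# The banner design and wording below are assumptions for demonstration.
from PIL import Image, ImageDraw

def label_synthetic_image(src_path: str, dst_path: str,
                          text: str = "Synthetically Generated Information") -> None:
    img = Image.open(src_path).convert("RGBA")
    width, height = img.size

    # A full-width banner whose height is (at least) one-tenth of the image
    # height covers at least 10% of the surface area. Ceiling division
    # guards against undershooting the threshold on odd heights.
    banner_height = (height + 9) // 10

    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.rectangle([(0, height - banner_height), (width, height)],
                   fill=(0, 0, 0, 180))  # semi-opaque black banner
    draw.text((10, height - banner_height + banner_height // 4),
              text, fill=(255, 255, 255, 255))

    # Composite the banner into the pixels themselves, so the label is
    # "permanent" rather than strippable metadata.
    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)

# Example usage (hypothetical file names):
# label_synthetic_image("generated.png", "generated_labelled.jpg")
```

Because the label is baked into the pixel data rather than stored as metadata, removing the file's metadata does not remove the identifier. An analogous approach for audio would overlay a spoken or tonal marker within the first 10% of the clip's duration.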
Creating a robust regulatory framework
Even though the notified amendment and the Draft Rules operate in different domains, they are closely connected – while the notified amendment strengthens procedural safeguards and accountability for all takedown requests, the Draft Rules extend the scope of content regulation to the new realm of AI-generated material. These frameworks will intersect whenever synthetic media is alleged to violate the law, for example, where a deepfake video threatens public order or defames an individual. In such cases, takedown orders under amended Rule 3(1)(d) will apply in conjunction with the obligations under the synthetic-content framework set out in the Draft Rules.
As the deliberation on how best to regulate SGI and related content continues, the envisaged regulatory framework in India should address several key gaps:
- The broad scope and vague technical standards in the Draft Rules could result in over-compliance and stifle legitimate creativity.
- The cost of implementing watermarking or verification systems may burden smaller platforms, and the removal of the earlier voluntary-removal protection under Rule 3(1)(d) may deter proactive moderation despite the inclusion of an explicit safeguard for the removal of harmful SGI.
- Experts caution that the expanded powers could enable content blocking outside the safeguards of Section 69A of the Act, inviting future constitutional scrutiny on grounds of proportionality and free expression.
- Without clear benchmarks, the ambiguity surrounding the requirement for 'reasonable and appropriate technical measures' may be misused by intermediaries for intrusive monitoring or automated screening to avoid liability. Such measures could raise privacy issues and may result in the over-removal of legitimate content.
- It is debatable whether generative-AI platforms qualify as 'intermediaries' under the Act, since they do not merely transmit or store information but actively generate new content. If they fall outside the statutory definition, the Rules may not effectively capture the very entities responsible for creating synthetic media.
The new framework reflects the government's growing awareness of the evolving digital landscape and the risks posed by synthetic media, including the proliferation of AI-generated images and videos from tools such as OpenAI's models and Google's Gemini that can distort reality, spread misinformation, or damage reputations, and that are already sparking disputes and litigation around intellectual property rights, authorship, ownership, and the misuse of creative works generated through AI. The notified amendment and the Draft Rules are a welcome step toward ensuring content authenticity and user protection, and should help align India's content-regulation regime with global standards.
Footnote
1 CS(COMM) 578/2025
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.