On October 22, 2025, the Ministry of Electronics and Information Technology ("MeitY") issued the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 ("2025 Amendment"). This notification amended Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Rules 2021"), which outlines the process by which government bodies may mandate that intermediaries (such as social media platforms, messaging services, and search engines) remove or restrict access to unlawful content. The 2025 Amendment, which takes effect on November 15, 2025, seeks to tackle growing concerns around synthetically generated material (including deepfakes and AI-created content) by strengthening the due diligence obligations under the Information Technology Act, 2000 ("IT Act"). It also introduces additional safeguards to ensure that decisions regarding content removal are carried out appropriately, transparently, and with accountability.
The timing of this amendment is particularly noteworthy, as it follows soon after the Karnataka High Court's decision in X Corp. v. Union of India, 2025 SCC OnLine Kar 19584. In that ruling, the Court upheld Rule 3(1)(d) and validated the government's Sahyog portal, which facilitates content removal and data requests, despite arguments that the portal enabled an unauthorized takedown mechanism lacking adequate safeguards under the IT Act.
Comparison of Rule 3(1)(d) before and after the 2025 Amendment
Under the erstwhile Rule 3(1)(d), intermediaries were obligated to remove or block access to unlawful content within 36 hours of acquiring "actual knowledge" through either a court order or a notice from the "appropriate government." As defined under Section 2(1)(e) of the IT Act, the term "appropriate government" encompasses both the central and state governments, thereby authorizing takedown directions from either level. The third proviso further provided a Good Samaritan-type protection ("Good Samaritan Proviso"), akin to Section 230(c)(2) of the US Communications Decency Act of 1996 but narrower in scope, permitting intermediaries to take down prohibited content under Rule 3(1)(b) or pursuant to user complaints without forfeiting their safe harbour protection under Section 79 of the IT Act.
The 2025 Amendment restructures the procedure for content takedown while retaining the 36-hour compliance period for intermediaries. Under the revised framework, takedown directions may only be issued by senior officials holding a rank not below that of joint secretary or its equivalent, or, if such an officer is unavailable, by a director or an officer of similar standing. For police departments, this authority is limited to officials of at least the rank of deputy inspector general. Each government or authorized agency must also designate a single officer to serve as the primary point of contact for issuing takedown orders.
All takedown orders must now be reasoned, clearly setting out the legal basis, relevant legislative provision, and the specific URL or digital identifier of the unlawful content. Additionally, a secretary-level officer is required to conduct a monthly review to assess whether these directives are justified and proportionate.
Procedural reforms introduced under the 2025 Amendment
The 2025 Amendment enhances procedural transparency under Rule 3(1)(d) by explicitly recognizing the authority of state governments and law enforcement agencies to issue takedown directives under the IT Act. It also requires the appointment of authorized officers, allowing intermediaries to confirm the authenticity of such requests. Moreover, substituting the ambiguous term "notification" with "reasoned intimation" represents a significant improvement, as it compels each takedown order to include its legal basis and justification, reinforcing that these are administrative actions supported by reasoned decision-making. The amendment also redefines Synthetically Generated Information ("SGI") as "information that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that reasonably appears to be genuine or true."
Definitional ambiguities surrounding SGI
However, this definition raises concerns, as its reliance on whether the content reasonably appears authentic introduces a degree of subjectivity, creating the risk of misclassification based on user perception rather than objective criteria. Hence, although obvious parodies might not fall within the definition of SGI, forms such as satire, reenactments, or exaggerated content could still be categorized as synthetic, leading to unnecessary notification fatigue among users. In addition, labeling content as SGI purely on the basis of its perceived authenticity could unintentionally convey a misleading sense of accuracy or credibility, even when the material remains unverified or unexamined.
Another concern with the revised definition is that the scope of SGI has become overly expansive, potentially covering more than the usual audio-visual manipulation seen in deepfakes and extending to text-based or routine digital content. For instance, content generated or refined by AI tools such as autocorrect, images modified with filters, or virtual reality media made to look realistic could all fall under this definition. Since most online material is produced or altered using algorithmic tools, this broad interpretation risks misclassifying genuine or lawful content as SGI solely based on how it was created. Such an overreach may go beyond the draft amendment's intent and trigger unintended regulatory implications. While the amendment does advance procedural precision, it still leaves critical issues of due process, natural justice, and transparency unresolved, and introduces new challenges about the autonomy of online platforms.
Persisting challenges in ensuring transparency
The enduring concern around the opacity of Rule 3(1)(d) remains unresolved under the 2025 Amendment. The amendment imposes no requirement to inform affected users or to make takedown directions public. The "reasoned intimations" are meant solely for exchange between government authorities and intermediaries, effectively excluding the general public and impacted individuals from the process. Because orders are communicated only to intermediaries, this lack of disclosure weakens the principles of natural justice and the fundamental right to information, and deepens the absence of transparency.
This approach contrasts sharply with the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 ("2009 Blocking Rules"), which required reasonable efforts to notify either the affected users or intermediaries.
The secrecy adopted under the present framework also conflicts with the Supreme Court's decision in Anuradha Bhasin v. Union of India, (2020) 3 SCC 637, which held that government actions restricting speech must be made public to allow for judicial scrutiny. Furthermore, the Karnataka High Court's ruling in X Corp. v. Union of India exacerbates the issue. The Court determined that intermediaries do not possess free speech rights under Article 19(1)(a) of the Indian Constitution, despite being the only entities receiving takedown directions under Rule 3(1)(d) and therefore best positioned to contest them.
The one redeeming feature, however, is that the Rule 3(1)(d) framework under the IT Rules 2021 does not prohibit intermediaries from publicizing takedown orders or informing the impacted users. This is in contrast to Rule 16 of the 2009 Blocking Rules, which requires confidentiality of blocking requests and complaints. Thus, even though disclosure remains optional, intermediaries may choose to act transparently in the public interest.
Procedural safeguards and the absence of independent oversight
The lack of procedural safeguards under Rule 3(1)(d) continues to be a significant concern. The 2009 Blocking Rules, which remain in force alongside Section 69A of the IT Act, provide a more comprehensive framework that includes interdepartmental review, notice, and hearings for affected parties, as well as the requirement for reasoned orders.
In contrast, the amended Rule 3(1)(d) restricts oversight to a monthly review conducted by a secretary-level officer within the same authority that issued the takedown order. Notably, the updated provision omits any requirement for a pre-decisional hearing, thereby weakening affected users' ability to contest arbitrary takedowns. This structure blurs the line between the decision-maker and the reviewer, while offering no mechanism to restore content that has been removed in error. As a result, oversight of takedown actions remains concentrated within the same administrative bodies that enforce them, leaving Rule 3(1)(d)'s procedural and accountability gaps unaddressed. Thus, while the recent amendment enhances administrative formality, it falls short of ensuring procedural fairness and independent review.
Challenges surrounding platform autonomy and Good Samaritan protection
The removal of the Good Samaritan Proviso under the amendment introduces a new regulatory challenge. Previously, this clause protected intermediaries that voluntarily acted to regulate unlawful content under Rule 3(1)(b), acknowledging their autonomy to manage harmful material and uphold decency on their platforms. Its omission now raises important questions about whether intermediaries could be held liable under Section 79(2) for exercising voluntary moderation.
However, this omission appears inadvertent rather than a deliberate policy shift, particularly since MeitY's draft deepfake regulation, released alongside the 2025 Amendment, continues to include Good Samaritan-style protections for intermediaries acting in good faith to remove synthetic content. Nevertheless, the inconsistency between the two frameworks generates regulatory ambiguity that may discourage proactive moderation, highlighting the need for MeitY to clarify its position to preserve platform autonomy and ensure responsible online content management.
Conclusion
The 2025 Amendment represents a constructive move towards procedural clarity by defining the competent authority, introducing the requirement for reasoned intimations, and ensuring that takedown directives correspond to specific and identifiable content. By mandating detailed and reasoned intimations, the amendment gives intermediaries clearer guidance for complying with the law. However, gaps in transparency and due process persist, and new concerns regarding platform autonomy have emerged. The framework could be significantly strengthened by enhancing disclosure mechanisms, establishing independent oversight, and promoting good faith moderation, thereby aligning more closely with the principles of procedural fairness and accountability within India's digital governance system.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.