India’s 2026 amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 has turned deepfake and AI generated content from a policy concern into a concrete safe harbour risk for intermediaries. The IT Rules 2026 make AI content governance a primary risk management obligation for digital platforms: they impose stricter due diligence for synthetically generated information (SGI) and tie non-compliance to the loss of legal protections. This article examines how the new rules affect intermediary safe harbour, what enhanced due diligence now looks like in practice, and what in-house legal and compliance teams should prioritise.
1. Safe Harbour Under The IT Act - What Is At Stake?
Section 79 of the IT Act shields intermediaries from liability for third party content they transmit or host, provided they observe the prescribed due diligence and comply with lawful government orders. Indian courts have consistently held that this safe harbour is conditional: an intermediary seeking immunity must show that it met its obligations to curb unlawful content. The 2026 amendment leaves Section 79 itself untouched, but it raises the bar for safe harbour protection by redefining what due diligence means for SGI and deepfakes.
2. How The IT Rules 2026 Change Intermediary Due Diligence
A. Structured due diligence for SGI enabling intermediaries
The Amendment Rules set out a structured due diligence process for intermediaries whose computer resources enable users to create, modify or distribute synthetically generated content. Such intermediaries must deploy "reasonable and appropriate technical measures" to prevent SGI that violates applicable law, a shift from the earlier regime, which focused on removing content after publication. In practice, this means SGI enabling platforms such as generative AI tools, editing applications and content creation suites are expected to embed safeguards into product design, not just policies.
B. Mandatory labelling, metadata and provenance for SGI
A key innovation of the 2026 framework is the requirement that synthetically generated information be clearly labelled and embedded with permanent metadata or similar provenance mechanisms, including unique identifiers. The rules indicate that intermediaries should ensure such labels or metadata cannot be easily removed or suppressed by end users, effectively pushing for tamper resistant signalling that content is AI generated. This pushes platforms toward watermarking, content provenance tools and system level signals that help users, regulators and investigators distinguish synthetic media from authentic material.
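To make the idea of tamper resistant provenance concrete, the sketch below shows one way a platform might bind an "AI-generated" label to content so that stripping or editing the label is detectable. This is an illustrative minimal design, not a mechanism mandated by the Rules: the key name, field names and use of an HMAC are assumptions, and real deployments would typically use standards-based tooling (such as C2PA content credentials) and key management infrastructure.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; in production this would live in an HSM/KMS.
SIGNING_KEY = b"platform-provenance-key"

def make_provenance_record(content: bytes, generator_id: str) -> dict:
    """Build a provenance record binding a synthetic asset to its origin.

    The signature covers both the content hash and the label fields, so
    removing or altering the label invalidates the record. This makes the
    label tamper *evident*, not tamper proof: a fully stripped label must
    still be caught by server-side records or watermark scanning.
    """
    record = {
        "label": "AI-generated",          # user-visible disclosure
        "generator_id": generator_id,     # unique identifier for the generating tool
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the label is intact and matches the content it travels with."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

The design choice worth noting is that the signature covers the content hash as well as the label, so re-attaching a valid-looking label to different content also fails verification.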
C. “Knowingly permitting” unlawful SGI and loss of safe harbour
The Amendment Rules expressly provide that if a significant social media intermediary (SSMI) knowingly permits, promotes or fails to act upon unlawful SGI in breach of the Rules, or does not meet the new SGI specific due diligence obligations, it will be treated as non-compliant with the IT Rules 2021. In such cases, the intermediary risks losing safe harbour protection under Section 79 for that content, exposing it to civil and potentially criminal liability. By linking SGI compliance to safe harbour, regulators are signalling that they expect platforms to operate proactive, technology driven content management systems rather than rely on reactive notice and takedown.
3. Impact On Different Categories Of Intermediaries
A. Social media and user generated content platforms
For social media and user generated content (UGC) platforms, the 2026 regulations turn deepfake rules into core operating requirements. These platforms must adopt explicit user facing SGI policies, build operational processes to remove unauthorised deepfakes within shortened timeframes, and invest in technology that automatically detects and classifies AI generated content. Platforms that rely solely on human review and legacy escalation procedures will struggle to show that their enforcement meets the "reasonable and appropriate" standard.
B. Generative AI tools and editing applications
Intermediaries that provide generative AI or editing capabilities occupy a particularly sensitive position because they directly enable the creation of SGI. They must notify users about prohibited SGI categories, apply labelling and metadata at the point of creation, and build safeguards against misuse, such as blocking requests for non consensual intimate imagery and impersonation. For these providers, documenting prompt filtering logic, red team testing and escalation pathways will be critical for defending safe harbour claims.
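The kind of prompt filtering logic such providers would document can be sketched in a few lines. The example below is deliberately simplistic and purely illustrative: the category names and patterns are assumptions, and production systems pair rules like these with trained classifiers and human review rather than relying on regexes alone.

```python
import re
from dataclasses import dataclass

@dataclass
class FilterDecision:
    allowed: bool
    reason: str

# Illustrative patterns only; a real system would combine ML classifiers,
# semantic matching and human escalation, and would log every decision.
BLOCKED_PATTERNS = {
    "impersonation": re.compile(r"\b(pretend|pose)\s+(to\s+be|as)\b", re.IGNORECASE),
    "ncii": re.compile(r"\b(undress|nude)\b.*\bwithout\s+(their\s+)?consent\b",
                       re.IGNORECASE),
}

def screen_prompt(prompt: str) -> FilterDecision:
    """Screen a generation prompt before it reaches the model.

    Returns a decision object so the block reason can be logged and shown
    to the user, supporting the documentation trail the Rules encourage.
    """
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            return FilterDecision(False, f"blocked: {category}")
    return FilterDecision(True, "ok")
```

Returning a structured decision, rather than a bare boolean, makes it straightforward to log the category behind each block, which is exactly the kind of record that helps demonstrate diligence later.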
C. Infrastructure, hosting and cross border platforms
Hosting providers, CDNs, cloud services and foreign platforms offering services into India are also within the ambit of the IT Rules and SGI obligations, even if their role is more infrastructure oriented. While the intensity of expectations may differ, cross border intermediaries are being told that India now expects real time moderation capabilities, local compliance infrastructure and executive level oversight for AI content governance. Failing to localise governance efforts may be framed as evidence of “knowledge” or negligence if unlawful SGI persists on their services.
4. Compliance And Risk Management Checklist
A. Governance and policy design
- Map whether and how your services enable the creation, modification or dissemination of SGI, and classify business units according to risk.
- Update terms of use, community guidelines and internal policies to define SGI and “Prohibited SGI” consistently with the Rules and relevant FAQs.
- Build a written SGI specific due diligence framework that ties into existing IT Rules 2021 compliance, including roles, escalation thresholds and reporting lines to senior management.
B. Technical and product controls
- Implement or upgrade AI content labelling and metadata mechanisms so that synthetic content generated or hosted by the platform carries robust, hard to remove provenance signals.
- Deploy automated detection tools and risk scoring to flag likely deepfakes and unlawful SGI for fast human review, especially when regulatory timelines are as short as three hours.
- Ensure your systems can log and retrieve detailed information about SGI creation, modification and takedown actions to support investigations and safe harbour defence.
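The logging point in the checklist above can be sketched as a hash-chained, append-only audit log: each entry embeds the hash of the previous entry, so any later alteration of the trail is detectable. This is a minimal illustration under assumed field names, not a format prescribed by the Rules; production systems would add durable storage, access controls and external timestamping.

```python
import hashlib
import json
from datetime import datetime, timezone

class SGIAuditLog:
    """Append-only, hash-chained log of SGI moderation actions.

    Chaining entry hashes makes retroactive edits detectable, which is
    useful when the log must support investigations or a safe harbour
    defence. Field names here are illustrative.
    """

    def __init__(self):
        self.entries = []

    def record(self, content_id: str, action: str, rationale: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "content_id": content_id,
            "action": action,            # e.g. "takedown", "label", "escalate"
            "rationale": rationale,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            claimed = dict(entry)
            entry_hash = claimed.pop("entry_hash")
            if claimed["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(claimed, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry_hash:
                return False
            prev_hash = entry_hash
        return True
```

Because each entry carries a timestamp and a free-text rationale, the same structure covers both the takedown timestamps and the decision rationale that the checklist calls for.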
C. Notice handling, documentation and oversight
- Create a dedicated intake and triage process for government and law enforcement notices relating to SGI, integrated with your legal and trust and safety teams.
- Maintain comprehensive logs of notices, decisions, takedown timestamps, and rationale to show that the platform acted diligently and within applicable deadlines.
- Proactively brief senior management and the board on SGI related risks, major incidents and readiness gaps, treating safe harbour preservation as a strategic risk topic rather than a purely legal issue.
5. Open Questions And Litigation Risks Around Safe Harbour
Civil society organisations and commentators have warned that the SGI regime may drive over blocking and automated removal, as platforms moderate aggressively to protect their safe harbour status. Courts will need to settle where "knowledge" and "knowingly permitting" unlawful SGI begin and end, particularly where intermediaries rely on probabilistic AI systems that sometimes misclassify content. Victims of deepfake abuse and regulators, for their part, will argue that failing to deploy state of the art detection tools is itself negligence.
Courts must also balance SGI specific due diligence requirements against constitutional protections for speech and privacy. Defining platform responsibilities will turn on at least three questions: how far the origin of SGI must be traceable, how long metadata must be retained, and whether extensive labelling and logging is proportionate in every context. Early jurisprudence on these issues will heavily influence the practical contours of safe harbour for AI related content in India over the next few years.
6. Key Takeaways For Legal And Compliance Teams
The IT Rules 2026 rewrite the terms of intermediary safe harbour by tying platform protection directly to how synthetically generated content is handled. Intermediaries dealing with SGI must now operate comprehensive control systems, designed, implemented and documented across their content operations, rather than simply responding to takedown notices. Legal and compliance teams should therefore prioritise SGI mapping, governance updates, technical safeguards and executive level oversight as part of their 2026–27 risk mitigation roadmap.
7. Practical FAQs On Safe Harbour And AI Generated Content
Q. Does complying with the IT Rules 2026 guarantee safe harbour?
No. Compliance with the IT Rules 2026 is a necessary baseline for safe harbour, not a guarantee. An intermediary can still be held liable if a court finds it had actual knowledge of unlawful SGI and failed to act. That said, a robust SGI due diligence programme, supported by proper documentation and technical safeguards, will significantly strengthen a safe harbour defence.
Q. Are foreign platforms and AI providers treated differently?
The Rules apply to intermediaries offering services in India, regardless of where they are incorporated. Global platforms and AI providers must therefore bring their Indian operations into line with local SGI obligations, including labelling requirements, notice handling procedures and takedown timelines, to avoid enforcement action and challenges to their safe harbour.
Q. How far must intermediaries go in detecting deepfakes?
The Rules talk about “reasonable and appropriate” technical measures, which will be interpreted in light of an intermediary’s scale, capabilities and risk profile. Large or SGI focused platforms will be expected to do more through automated tools, watermarking and human review than smaller, low risk services.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.