28 January 2026

From Neutral Conduits To Content Architects: Redrawing The Lines Of Intermediary Liability

DSK Legal

Contributor

DSK Legal is known for its integrity, innovative solutions, and pragmatic legal advice, helping clients navigate India’s complex regulatory landscape. With a client-centric approach, we prioritize commercial goals, delivering transparent, time-bound, and cost-effective solutions.

Our diverse and inclusive culture fosters innovative thinking, enabling us to craft exceptional legal strategies. Recognized for excellence, we attract top talent and maintain strong global networks, ensuring seamless support for cross-border matters and reinforcing our position as a trusted legal partner.


I. Introduction

The rapid evolution of artificial intelligence ("AI") has profoundly altered the digital landscape, eroding the long-standing legal distinctions between content originators, AI developers, and online intermediaries. Statutory safe harbour protections, i.e., legal immunity that shields intermediaries from liability for third-party content that they merely receive, store, transmit, or facilitate on behalf of another person ("Safe Harbour"), were crafted for passive hosting and transmission functions and did not contemplate platforms deploying proprietary AI systems capable of autonomous content generation, contextual manipulation, and behavioural influence. The challenge is compounded by the dual-edged nature of AI technologies: while driving efficiency and personalization, they are equally susceptible to misuse for deepfakes, automated fraud, misinformation campaigns, and cyber-enabled harms at an unprecedented scale. This convergence marks a decisive inflection point in intermediary jurisprudence, with platform-integrated AI systems such as Grok raising fresh questions about the continuing availability of Safe Harbour protections. This article examines how AI-driven platforms are blurring the boundaries of the Safe Harbour doctrine, distinguishes between facilitative and integrative AI deployment models, and maps the evolving contours of intermediary liability in the generative AI era.

II. The Bifurcated Analysis: Third-Party AI Versus Proprietary AI

A. Scenario 1: Third-Party AI Hosted on an Intermediary Platform

Where a platform's role is limited to hosting or enabling access to an AI tool that is independently developed, deployed, and operated by a third party, it may continue to retain its status as an "intermediary" under Section 79 of the Information Technology Act, 2000 ("IT Act"). In such arrangements, effective control over training, operation, and content generation vests with the third-party AI developer, thereby preserving the platform's passive intermediary role and its eligibility for statutory Safe Harbour. Direct liability under the applicable laws accordingly rests with the third-party AI provider as the content originator. The platform's liability is limited and arises only upon the acquisition of actual knowledge through a court order or a lawful government notification and its subsequent failure to act expeditiously to remove or disable access to the unlawful content, in compliance with Rule 3 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Rules, 2021"). This position is consistent with settled jurisprudence, including Shreya Singhal v. Union of India and MySpace Inc. v. Super Cassettes Industries Ltd., which affirms that intermediaries hosting third-party applications retain immunity so long as they act expeditiously to remove or disable access to unlawful material upon receiving actual knowledge through a court order or a notification from a competent authority.

Notably, even where intermediary protection is interpreted in its traditional and narrow sense so as to shield platforms from liability for third-party applications, the securities sector reflects a comparatively more stringent approach. The Securities and Exchange Board of India ("SEBI") introduced Regulation 16C through the Intermediaries (Amendment) Regulations, 2025, which places sole responsibility on every SEBI-regulated entity for any AI or machine learning tools it deploys, whether developed in-house or sourced externally. This responsibility extends to data privacy and security, the integrity of AI-generated outputs, and compliance with applicable laws.

B. Scenario 2: Proprietary AI Integrated by the Intermediary

By contrast, when a platform develops and integrates its own proprietary AI system ("Proprietary AI") as a core feature, the legal character of the platform may shift from intermediary to publisher or originator of content. In such cases, AI-generated outputs may constitute active initiation, selection, and shaping of information, accompanied by editorial control exercised through choices relating to training datasets, model architecture, filters and guardrails. Such a degree of control may be perceived to be inconsistent with the core premise of Section 79 of the IT Act and may, as a result, undermine the platform's entitlement to Safe Harbour.

The IT Act defines an "intermediary" as an entity that receives, stores, transmits, or provides a service in relation to an electronic record on behalf of another person. A plain reading of this definition makes it clear that intermediary status is confined to activities involving third-party content, limited to the facilitation of receipt, storage, transmission, or access, and does not extend to entities that originate or actively shape the content itself. Where a platform acts as anything other than a mere conduit, it may be exposed to direct liability under the applicable laws, with obligations extending beyond reactive takedowns to a proactive duty to prevent unlawful or harmful outputs. In such circumstances, courts are likely to treat the platform as a publisher, and the usual requirement of "actual knowledge" carries less weight, since platforms deploying Proprietary AI can be assumed to have at least constructive knowledge of their systems' capabilities and foreseeable risks. Inadequate testing, insufficient safeguards, or a failure to anticipate predictable misuse are likely to attract closer regulatory and judicial scrutiny where the resulting harm arises from the operation of the platform's own AI infrastructure.

III. Why This Matters Now: The Grok Controversy as a Regulatory Flashpoint

x.AI introduced Grok, an AI assistant designed to offer real-time search, image generation and trend analysis. Grok's rollout raised significant concerns because it enabled users to create sexually explicit or obscene visual content of individuals and to post such images publicly on X, the associated social media platform. Non-consensual sexualised imagery could thus be generated and immediately disseminated to a potentially vast audience, and the feature was widely misused for this purpose by a large user base, particularly to target women and minors. The episode underscores the immediate regulatory risks posed by generative AI integrated into large social media platforms. India is among a handful of governments, including the UK, Brazil, Malaysia and the EU, that have swiftly responded to the controversy.

The defence taken by x.AI portrays Grok as a neutral tool and shifts responsibility entirely onto users, but this framing ignores the technical reality of generative AI. Generative AI outputs are not created in isolation by user prompts alone; they are shaped by design choices relating to training data, model architecture, and built-in safeguards. Grok therefore cannot readily be characterised as a purely passive system; rather, it reflects a series of deliberate design, training, and deployment choices, inviting further scrutiny of the human and organisational decisions underlying its outputs and their dissemination on X.

The Grok episode raises a fundamental question under Indian law: can a system that actively generates and amplifies content continue to claim Safe Harbour? In response to widespread public and parliamentary concern, the Ministry of Electronics and Information Technology ("MeitY") issued a letter dated January 2, 2026, directing X to undertake a comprehensive review of Grok's technical design, governance structures, and safety guardrails ("Direction"). Grok admitted to its lapses in oversight and detailed its moderation measures and immediate remedial steps, including content removals and account suspensions. However, MeitY reportedly found the response "inadequate," seeking further clarity on post-facto enforcement as well as robust preventive safeguards embedded within Grok's architecture, data inputs, and filtering mechanisms.

This Direction reaffirmed MeitY's earlier advisory, which required all intermediaries to ensure strict compliance with the IT Act and the IT Rules, 2021, particularly in relation to unlawful and AI-generated content. India has so far adopted a cautious and largely hands-off approach towards deepfakes and non-consensual AI-generated media, but has recently introduced draft amendments to the IT Rules, 2021 which propose enhanced due-diligence obligations on social media intermediaries, including mandatory labelling of AI-generated content and proactive identification and removal without awaiting court or government orders. In recent years, MeitY has consistently emphasised that intermediary Safe Harbour protections are contingent upon strict adherence to takedown and content-governance obligations. The shift of generative AI systems such as Grok from passive facilitation to active content generation and dissemination invites closer examination of the boundaries of existing Safe Harbour frameworks.

IV. Conclusion

The rise of generative AI has sharpened the legal divide between platforms that merely facilitate access to third-party AI tools and those that deploy Proprietary AI systems as core features. While the former may continue to claim Safe Harbour under Section 79 of the IT Act by maintaining technical neutrality and complying with due-diligence obligations, the latter increasingly resemble content originators by exercising editorial control through training decisions, model design, and safety guardrails. Recognising this shift, the Indian Government has taken a clear position: platforms deploying such AI systems must strictly comply with the IT Act and the IT Rules, 2021 and implement robust preventive safeguards, failing which Safe Harbour protection may be withdrawn. The emerging position suggests that intermediary immunity in the AI era is increasingly conditional and may not readily extend to platforms that actively generate and amplify unlawful content.

Disclaimer: This article is general in nature and is not intended to be a substitute for specific legal advice. Please contact the author(s) for specific legal advice in this regard.
