ARTICLE
19 December 2025

Draft 2025 Amendment: Criminal Law Exposure Of Intermediaries In Fight Against Deep Fakes

Khaitan & Co LLP
By Ishan Khanna

Artificial Intelligence ('AI') is a revolutionary breakthrough in technology, which has forced people to reimagine their lives and how to approach different tasks. Online platforms have created programs/tools, which are user friendly and allow users to enter written commands in plain language. The commands are then executed by AI to produce the desired results, which could include creation of fake videos, audio clips, original written content, etc. based on the nature of the users' requests.

The above technology has revolutionised various industries, where different complicated tasks which earlier required extensive human application can now be completed with relative ease, in a much shorter span of time. However, considering the potency of this AI technology, the same comes with significant possibility of misuse. For example, miscreants can create fake videos which appear to showcase seemingly true events being recorded by a camera, by giving appropriate commands to AI tools. These videos (often called deep fakes) can then be circulated on social media to cause major mischief. Similarly, audio clips emulating the voice of famous celebrities/politicians may be artificially created and used maliciously.

While any miscreants indulging in the above acts would be liable for penal consequences, tracing such miscreants is often difficult. Resultantly, the Government recently felt compelled to increase the scope of criminal liability stemming from illegal use of deep fakes and extended such liability to various kinds of websites, in order to increase accountability of players operating in the cyber space.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ('the 2021 Guidelines') are now proposed to be amended through a draft amendment of 2025 ('the 2025 Amendment'), which incorporates important changes. The amendment appears intended to prescribe additional obligations under the law upon:

  • persons operating online platforms which give their users access to AI tools (for example, ChatGPT); and
  • social media websites with a large number of users which allow users to post AI-generated content on their websites/online platforms.

By way of this article, we aim to capture the changes proposed by the 2025 Amendment, and the resultant change in the criminal liability of website owners which may arise from misuse of AI.

A. Information Technology Act, 2000 and the concept of 'Intermediary'

The Information Technology Act, 2000 ("the IT Act"/"the Act") prescribes the statutory framework to regulate activities in cyber space, which includes all activities taking place, inter alia, on the world wide web. The IT Act defines certain offences committed using 'computer systems/networks', which are broadly defined terms covering any and all devices capable of creating electronic data.

In order to ensure better regulation of cyber space and to prevent any illegal activities, the IT Act prescribes specific obligations on entities dealing with electronic data (self-made or third party) on the internet. Such entities have been collectively defined as 'intermediaries', the IT Act definition of which is reproduced hereunder for ease of reference:

Section 2(1)(w) – 'intermediary':

"'intermediary', with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes."

While persons posting offending content on the internet (which triggers the definition of any offence) are liable to be prosecuted, intermediaries have been extended protection against such prosecution, if the offending data is published on their platform by a third party user.

However, to avail of this 'safe harbour' protection, which is granted under Section 79 of the IT Act, the relevant intermediary needs to establish that:

  • it acted only as a facilitator and observed the prescribed due diligence standards; and
  • it did not initiate, select, or modify the transmission of the offending content.

However, if the intermediary is made aware of the offending content (by a court or Government order) and still fails to take appropriate action, the safe harbour immunity can be withdrawn, and the intermediary may be exposed to the same prosecution which the content creator is facing.

The standards of diligence required to satisfy the criteria of Section 79 of the IT Act are prescribed under the 2021 Guidelines, in supersession of the erstwhile Information Technology (Intermediaries Guidelines) Rules, 2011.

B. The 2021 Guidelines and its key features

All entities falling within the definition of 'intermediary' must comply with the 2021 Guidelines in order to avoid prosecution for third party content posted on their respective online platforms. Certain key features of the 2021 Guidelines, which are relevant to the present discussion, are as follows:

  • Due Diligence requirements: Rule 3 prescribes exhaustive due-diligence obligations to be exercised by any intermediary, including social media platforms, in order to avail safe harbour under Section 79 of the Act. These obligations include the duty to clearly publish the website's terms of service, privacy policy, user agreement on the intermediary's website, and specifically informing users not to upload unlawful or harmful content. The 2021 Guidelines also mandate the intermediaries to promptly remove prohibited material, within 36 hours of receiving a court order or a Government notice.
  • Grievance Redressal Mechanism: All intermediaries must set up a grievance redressal system for their users and publish its details on their website. They must appoint a 'Grievance Officer' to acknowledge complaints within 24 hours and resolve them within 15 days (or within 72 hours in urgent situations).
  • Classification of Intermediaries: The 2021 Guidelines distinguish between "social media intermediaries" ("SMIs") and "significant social media intermediaries" ("SSMIs") based on user base thresholds. Consequently, any social media intermediary having more than 50 Lac (5 million) users constitutes a significant social media intermediary1.
  • Additional Duties for SSMIs: The SSMIs are subjected to enhanced compliance obligations. They are required to appoint a Chief Compliance Officer2, a Nodal Contact Person (responsible for 24x7 coordination with law enforcement agencies and officers to ensure compliance with their orders or requisitions seeking data/information), and a Resident Grievance Officer (responsible for maintaining, implementing and operating the grievance redressal mechanism for the intermediary), all residing in India.
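The compliance clocks described above can be sketched in a few lines of code. This is purely illustrative: the timelines (36 hours for takedown, 24 hours for acknowledgement, 15 days or 72 hours for resolution) come from the 2021 Guidelines as summarised in this article, but the function and field names below are our own and are not part of any statute or rule.

```python
from datetime import datetime, timedelta

# Timelines summarised from the 2021 Guidelines (illustrative only).
TAKEDOWN_WINDOW = timedelta(hours=36)           # removal after a court/Government order
ACK_WINDOW = timedelta(hours=24)                # acknowledgement of a grievance
RESOLUTION_WINDOW = timedelta(days=15)          # ordinary grievance resolution
URGENT_RESOLUTION_WINDOW = timedelta(hours=72)  # resolution in urgent situations

def compliance_deadlines(received_at: datetime, urgent: bool = False) -> dict:
    """Return the latest compliant timestamp for each obligation,
    counted from when the order or complaint was received.
    (Hypothetical helper; not prescribed by the Guidelines.)"""
    return {
        "takedown_by": received_at + TAKEDOWN_WINDOW,
        "acknowledge_by": received_at + ACK_WINDOW,
        "resolve_by": received_at + (URGENT_RESOLUTION_WINDOW if urgent
                                     else RESOLUTION_WINDOW),
    }

# Example: a Government notice and a user grievance both received
# at 10:00 on 1 March 2025.
d = compliance_deadlines(datetime(2025, 3, 1, 10, 0))
```

The point of the sketch is simply that each obligation runs on its own clock from the moment of receipt; an intermediary's internal tooling would need to track all three in parallel.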

C. Introduction to the proposed 2025 Amendment to the 2021 Guidelines

In recent times, there has been a sudden rise in instances of misuse of AI generation tools to create deepfakes and other synthetic media, which may be used to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.

In order to tackle this rising menace, the Ministry of Electronics and Information Technology ("the Ministry"), on 22 October 2025, released the draft 2025 Amendment, inter-alia, proposing additional due diligence for the following classes of intermediaries:

  • online intermediaries which offer AI tools that users can utilise to create 'synthetically generated information'; and
  • any SSMI on whose online platform users can post 'synthetically generated information'.

The 2025 Amendment is yet to be notified in the official gazette; it has been published on the Ministry's website for public consultation and awaits public comments/feedback.

D. Key changes in 2025 Amendment which target deep fakes and other SGI

The key changes of the 2025 Amendment relevant to the present discussion are as under:

  • 'Synthetically Generated Information' [Rule 2(1)(wa)]: The term 'Synthetically Generated Information' ("SGI") has been defined as follows:

"information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true"

  • 'Information' to include SGI [Rule 2(1)(A)]: It has been clarified that the existing references to 'information' in the context of committing an unlawful act under the 2021 Guidelines now include SGI. As a result, the obligation to make reasonable efforts to prevent users from posting harmful content under Rule 3(1)(b), the obligation to remove content upon government notification or court order under Rule 3(1)(d), and the additional due diligence obligations upon SSMIs under Rules 4(2) and 4(4), all extend to cases where the information contains SGI as well.
  • Due Diligence obligations upon intermediaries offering AI tools [Rule 3(3)]: Every intermediary offering programs/tools to create or modify SGI, i.e. tools which use AI to generate data, must ensure that any SGI generated from its platform is labelled or embedded with a permanent unique metadata or identifier ('Label').

Such Label must provide adequate identifiers to help in immediate identification of the content as SGI. Further, the said Label must be incorporated in the following manner in the SGI:

  • covering at least 10% of the surface area of a visual display; or
  • in the case of audio content, during the initial 10% of its duration.
  • Enhanced obligations for SSMIs [Rule 4(1A)]: Separate from the obligations upon intermediaries offering AI tools, SSMIs, before permitting publication of any SGI on their platform, must take the following steps:
    • Obtain a user declaration on whether the uploaded information is SGI;
    • Additionally, deploy reasonable and proportionate technical measures to verify such declarations.

If any information is found to be SGI on the basis of either of the above steps, the intermediary also has a duty to ensure that it is published only with appropriate labelling indicating that the information is SGI.

  • Proactive approach towards removal of harmful SGI [Proviso to Rule 3(1)(b)]: The safe harbour immunity (which is available to intermediaries only acting as conduits to third party content and which do not initiate, select, or modify the transmission) remains available to intermediaries even if they take-down any SGI, which is found to be violative of their own terms and conditions or of the law. Hence, taking down offensive content would not amount to such intermediaries acting as modifiers of the content.
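The numeric thresholds in draft Rule 3(3) reduce to simple arithmetic, sketched below. To be clear, this is our own illustration of the 10% figures quoted above; the draft rule prescribes no particular computation, format, or API, and the helper names are hypothetical.

```python
# Illustrative sketch of the labelling thresholds in draft Rule 3(3):
# a visible Label covering at least 10% of a visual display, or the
# first 10% of an audio clip's duration. All names here are our own.

LABEL_FRACTION = 0.10  # the 10% threshold from the draft 2025 Amendment

def min_label_area(width_px: int, height_px: int) -> float:
    """Minimum Label area, in pixels, for an image or video frame
    of the given dimensions."""
    return LABEL_FRACTION * width_px * height_px

def min_label_duration(total_seconds: float) -> float:
    """Length of the opening segment of an audio clip that must
    carry the SGI Label."""
    return LABEL_FRACTION * total_seconds

# Example: a 1920x1080 frame and a 60-second audio clip.
area = min_label_area(1920, 1080)   # at least ~207,360 px² of Label
secs = min_label_duration(60.0)     # Label during the first ~6 seconds
```

Even in this toy form, the sketch illustrates one of the concerns discussed later in this article: for textual output there is no natural "surface area" or "duration", so it is unclear how the 10% threshold would apply if plain AI-generated text were treated as SGI.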

E. Criminal law exposure of intermediaries

If any intermediary permitting the use of AI tools by users and/or any SSMI allowing publication of third party content on its platform fails to comply with the above requirements, it can be held to be in breach of the 2021 Guidelines (as proposed to be revised by the 2025 Amendment). Consequently, such an intermediary can lose the 'safe harbour' protection under Section 79 of the IT Act and can be forced to face prosecution for any offending SGI published on its platform. By way of example, consider a situation where a disturbing deep-fake video is posted on a major social media website and creates unrest in society because people believe it to be true. In such a situation, if the social media website owner has not carried out the requisite due diligence prescribed by the 2025 Amendment, it can be liable to prosecution for any offence triggered by publication of the deep-fake video.

However, if the 2021 Guidelines have been duly complied with, the concerned intermediary can avoid prosecution, and the only parties facing liability will be the generator and the uploader of the AI content.

F. Key benefits of the 2025 Amendment

The 2025 Amendment may play a pivotal role in furthering the Ministry's attempt to address issues arising from SGI, including deepfakes, in the following manner:

  • It aims to enhance the accountability and transparency of intermediaries, including SSMIs hosting SGI (including deepfake or other AI-generated content), to protect citizens from misinformation, impersonation and misuse of AI technologies.
  • This Amendment also proposes to reinforce a favourable framework by continuing to safeguard intermediaries acting in good faith under Section 79(2) of the Act, while simultaneously improving grievance redressal mechanisms for users affected by misleading AI generated content.
  • The mandate of labelling the SGI also empowers users to easily distinguish between authentic and synthetic information, thereby building public trust in information available on the internet.

G. Key areas of concern arising from the 2025 Amendment

The definition of SGI creates uncertainty:

  • A bare reading of the definition of 'SGI' in the 2025 Amendment indicates that it does not cover all 'artificially created content', but is limited to such AI content as 'reasonably appears to be authentic or true'. In other words, it appears to cover only such SGI as gives an overt impression of resembling reality, i.e. content which can lead to a reasonable belief that the information was captured from a real source (and is not artificially created).
  • It naturally flows from the above, that simple text based information such as written essays etc. generated using AI may not amount to SGI, for the purposes of the 2025 Amendment. However, the language of the 2025 Amendment does leave room for doubt and confusion.
  • This confusion may lead to a situation where regular, day-to-day use of AI becomes difficult, as even simple written text generated using AI could not be used in everyday life without a Label covering at least 10% of its visual display.
  • This confusion in scope of the term 'SGI' also creates an uncertain position for intermediaries, where determining their exact scope of liability may be difficult.

SSMIs have an additional liability to self-determine deep-fakes

  • In addition to the obligation upon providers of AI tools to mark any data generated by them, SSMIs bear the burden of using 'reasonable and proportionate technical measures' to identify SGI. This phrase is generic and open to wide interpretation; hence, the exact scope of liability cannot be determined.
  • Various SSMIs may not have the requisite technical ability to determine which audio/video content is AI-generated.
  • This requirement has only been brought in for SSMIs and other intermediaries have been left out of its scope. This still leaves room for open circulation of deep-fakes on social media websites which have not crossed the user threshold of 5 million users.

H. Impact of the proposed 2025 Amendment on Intermediaries

While the 2025 Amendment aims to protect citizens from AI generated deepfakes, it significantly increases the compliance burden on intermediaries, more specifically, SSMIs. The scope of the term 'SGI' has been left broad, which leaves it open to interpretation. The term may be interpreted restrictively only to include deep-fake videos, audios or images; however, a broad interpretation may be taken to extend the scope of SGI to cover any output from an AI tool (including written text answers, program code etc.) This creates uncertainty in determining the exact compliance required from intermediaries and the resultant liability which may attach to them.

However, the 2025 Amendment specifies that the actions required to be taken by intermediaries will not compromise their safe harbour protection under Section 79 of the IT Act, which is a welcome step. Further, these changes in law can help protect citizens from the menace of misuse of AI, and can help AI platforms gain public trust by making AI-generated information (or SGI) clearly distinguishable from regular information.

With some language changes which help in better defining the scope of intermediary liability and what constitutes SGI, the 2025 Amendment can be a welcome change to prevent misuse of an otherwise powerful development tool – Artificial Intelligence.

Footnotes

1. Notification No. S.O. 942(E) dated 25 February 2021.

2. A key managerial person of such SSMI who shall be responsible for ensuring compliance with the Act and rules. In the event of failure, such Chief Compliance Officer shall be liable in any proceedings relating to any relevant third-party information, data or communication link made available or hosted by that intermediary.

The content of this document does not necessarily reflect the views / position of Khaitan & Co but remain solely those of the author(s). For any further queries or follow up, please contact Khaitan & Co at editors@khaitanco.com.
