The rapid advancement of generative artificial intelligence has transformed how content is created, consumed, and distributed. Deepfakes, hyper-realistic synthetic audio-visual media, now pose significant threats to personal privacy, political stability, consumer protection, and corporate integrity. In 2025, India took a significant regulatory step by proposing amendments to the Information Technology (Intermediary Guidelines & Digital Media Ethics Code) Rules, 2021 ("IT Rules"), focused on "synthetically generated information." The initiative seeks to align India's regulatory approach with emerging international standards.
1. Why Is the Regulation of Deepfakes Essential?
India's vast digital ecosystem, characterized by nearly a billion users, mobile-first access, and extensive multilingual connectivity, is particularly susceptible to manipulated and AI-generated content. Over the past year, deepfake-related harms, including impersonation scams, doctored political videos, fabricated celebrity endorsements, and non-consensual intimate imagery, have proliferated, exposing deficiencies in platform accountability and user protection.
In response, the Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the 2021 IT Rules to establish a more structured framework for governing synthetic media. The amendments emphasize three primary objectives:
- Transparency, ensuring users can recognize AI-generated or altered content;
- Traceability, facilitating the identification of the source and modifications of a deepfake;
- Accountability, imposing enforceable responsibilities on both intermediaries and users.
Collectively, these modifications signify a transition from advisory-based guidelines to a binding regulatory framework designed to enhance the integrity of India's digital ecosystem.
2. Overview of India's Proposed Framework
2.1 Defining Synthetic Content: The draft rules introduce a broad definition of "synthetically generated information," covering any content that is artificially or algorithmically created or modified. This includes deepfakes, AI-generated images and video, cloned audio, and synthetic voice outputs, ensuring the framework captures the full range of contemporary generative-AI manipulation.
2.2 Mandatory Labelling Requirements: A notable feature is the quantified disclosure requirement. Visual media must display a label covering no less than 10% of the screen area, while audio media must carry a disclosure lasting at least 10% of total duration. Such specificity, uncommon in international frameworks, aids consistency and enforceability, but it also raises practical questions about implementation across formats such as vertical video, short-form clips, and mixed-media content.
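To illustrate, the threshold lends itself to a simple automated check. The Python sketch below assumes a bounding-box interpretation of "screen area" and a plain duration ratio for audio; the draft rules do not yet prescribe a measurement methodology, so both interpretations are assumptions rather than the rules' own test.

```python
# Illustrative check of the draft rules' 10% disclosure thresholds.
# Assumes a bounding-box reading of "screen area"; the draft does not
# specify how coverage should actually be measured.

LABEL_COVERAGE_THRESHOLD = 0.10       # >= 10% of screen area (visual media)
DISCLOSURE_DURATION_THRESHOLD = 0.10  # >= 10% of total duration (audio)

def visual_label_compliant(frame_width: int, frame_height: int,
                           label_width: int, label_height: int) -> bool:
    """Check whether a label's bounding box covers >= 10% of the frame."""
    frame_area = frame_width * frame_height
    label_area = label_width * label_height
    return label_area / frame_area >= LABEL_COVERAGE_THRESHOLD

def audio_disclosure_compliant(disclosure_seconds: float,
                               total_seconds: float) -> bool:
    """Check whether a spoken disclosure runs for >= 10% of total duration."""
    return disclosure_seconds / total_seconds >= DISCLOSURE_DURATION_THRESHOLD

# Example: a 1080x1920 vertical short with a full-width 200px banner label
print(visual_label_compliant(1080, 1920, 1080, 200))  # ~10.4% -> True
print(audio_disclosure_compliant(5.0, 60.0))          # ~8.3% -> False
```

Even this toy check surfaces the format questions the final rules will need to settle, such as whether coverage is measured per frame or averaged across a video.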
2.3 Embedded Metadata and Provenance Indicators: Content intermediaries are required to integrate persistent metadata or distinctive provenance markers that indicate whether the content is synthetic or altered. This aligns India's approach with emerging global standards such as C2PA and EU-supported authenticity frameworks, aiming to foster long-term traceability across diverse platforms.
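As a minimal illustration of the provenance idea (and emphatically not the C2PA specification or any real SDK), the sketch below hand-rolls a record that binds a content hash to a synthetic-content flag. All field names are assumptions for exposition; a production system would embed a standards-compliant, cryptographically signed manifest.

```python
# Hand-rolled provenance record, loosely inspired by C2PA-style assertions.
# Field names are illustrative assumptions, not any published schema.
import datetime
import hashlib
import json

def build_provenance_record(content_bytes: bytes, *, is_synthetic: bool,
                            generator: str) -> str:
    """Build an illustrative provenance record for a piece of media."""
    record = {
        # hash ties the record to the exact bytes it describes
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        # the disclosure flag the draft rules would have intermediaries embed
        "is_synthetically_generated": is_synthetic,
        "generator": generator,  # e.g., the model or tool that produced it
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(build_provenance_record(b"...media bytes...", is_synthetic=True,
                              generator="example-image-model"))
```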
2.4 Uploader Disclosures and Platform Verification: Content creators must explicitly indicate whether the uploaded material is AI-generated. In turn, platforms are obligated to implement reasonable verification mechanisms, such as automated detection tools or pattern-recognition systems, to validate the accuracy of such declarations. This dual responsibility distributes liability between creators and intermediaries while reinforcing the gatekeeping role of platforms.
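The dual-responsibility model can be pictured as a reconciliation step: the platform compares the uploader's declaration against an automated detector's output and escalates mismatches. The sketch below is schematic; `detector_score` is a stub standing in for a real synthetic-media classifier, and the 0.8 threshold is an arbitrary assumption, not a figure from the draft rules.

```python
# Schematic reconciliation of uploader declarations with automated detection.
# detector_score() is a placeholder; real systems would use calibrated models.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # the uploader's own disclosure

def detector_score(content_id: str) -> float:
    """Stub for a synthetic-media classifier returning P(synthetic)."""
    return 0.92  # fixed value purely for illustration

def reconcile(upload: Upload, threshold: float = 0.8) -> str:
    """Accept consistent uploads; flag undeclared content the detector hits."""
    likely_synthetic = detector_score(upload.content_id) >= threshold
    if upload.declared_synthetic or not likely_synthetic:
        return "accept"           # declaration consistent with detection
    return "flag_for_review"      # undeclared content flagged by the detector

print(reconcile(Upload("vid-123", declared_synthetic=False)))  # flag_for_review
```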
2.5 Safe-Harbour Exposure: Failure to comply with these requirements may jeopardize intermediaries' safe-harbour protections under Section 79 of the IT Act. By extending due diligence, content removal, and grievance redressal obligations to synthetic media, the proposed regulations significantly elevate the compliance risks for platforms functioning within India's digital landscape.
3. Comparison With Global Regulatory Standards
India's proposed framework reflects several global trends while incorporating distinctive regulatory features.
3.1 Singapore: Singapore's Online Criminal Harms Act (OCHA) expressly criminalizes the creation and dissemination of harmful manipulated media and confers on authorities specific powers to counter deepfake-driven electoral interference. While India's proposed rules do not create new criminal offenses, they are motivated by analogous concerns: electoral integrity, public confidence, and information disorder.
3.2 United Kingdom: The UK's Online Safety Act imposes a statutory "duty of care" on digital platforms and criminalizes the production and dissemination of non-consensual deepfake intimate imagery. India adopts a similar approach by strengthening intermediary responsibilities, particularly for larger platforms, without instituting equivalent criminal provisions.
3.3 United States: The United States presently lacks a cohesive federal framework, instead depending on state-level legislation regarding election deepfakes and enforcement actions by the Federal Trade Commission targeting misleading AI-enabled endorsements. India's rules-based, nationwide framework offers greater uniformity but may encounter more intricate challenges in implementation and enforcement.
3.4 Australia: Australia's eSafety Commissioner has adopted a harm-centric approach to AI-generated content, emphasizing layered risk assessment and rapid response. India's model likewise focuses on harm prevention, but situates it within a broader intermediary-liability framework rather than a separate AI-governance statute.
4. Strengths of India's Proposed Rules
4.1 Rapid and Proactive Regulatory Response: By leveraging the existing IT Rules framework rather than waiting for a standalone AI statute, India has positioned itself to respond more quickly to emerging deepfake-related harms. This agility enables timely intervention in a fast-evolving technological landscape.
4.2 Clear and Measurable Compliance Obligations: The introduction of quantifiable labelling requirements, mandatory metadata embedding, and uploader disclosure obligations creates a set of concrete, operationally clear compliance standards. These measurable benchmarks distinguish India's approach from many global counterparts that rely on more general or principle-based guidance.
4.3 Multi-Layered Accountability Model: The framework distributes responsibility across users, intermediaries, and the underlying technical infrastructure, establishing a holistic and vertically integrated compliance structure. This layered approach reduces over-reliance on any single actor and improves the overall robustness of regulatory enforcement.
4.4 Alignment With International Regulatory Trends: India's emphasis on transparency, provenance, and intermediary responsibility aligns closely with global developments in AI governance. This harmonisation supports consistent compliance practices for multinational technology companies and facilitates interoperability with emerging global standards.
5. Challenges and Gaps
While the draft rules represent an important regulatory step, several issues merit closer examination and refinement.
5.1 Potentially Overbroad Scope: The expansive definition of synthetic content may inadvertently capture benign or creative applications of generative AI, such as artistic renderings, animation, satire, AR filters, and minor AI-driven enhancements. Without clear exemptions, the framework risks over-regulating routine digital expression.
5.2 Limited Enforcement Capacity: India currently lacks a specialised AI-forensics capability or a dedicated digital-content regulator equipped to audit provenance metadata, verify manipulations, or monitor compliance in real time. Effective enforcement will require significant institutional investment.
5.3 Operational Burden on Intermediaries: The requirement to verify user disclosures will necessitate reliance on automated detection and pattern-recognition tools, many of which remain imperfect, particularly in India's linguistically diverse and high-volume content environment. This may strain platform resources and increase error rates.
5.4 Risk of Excessive Moderation: The possibility of losing safe-harbour protection may incentivise intermediaries to adopt overly cautious takedown practices. Such defensive moderation could adversely impact lawful speech, satire, political expression, and creative content, raising freedom-of-expression concerns.
5.5 Insufficient User Awareness: Even prominent labels, such as the mandated 10% screen or duration disclosure, may not be meaningfully interpreted by new or low-literacy internet users. Without parallel public-awareness efforts, the effectiveness of labelling obligations may remain limited.
5.6 Disproportionate Impact on Startups: Early-stage AI and digital-media companies may struggle to meet metadata, provenance, and verification requirements, facing higher compliance costs and technical barriers. This could inadvertently discourage innovation and tilt the market in favour of larger incumbents.
6. Implementation Challenges Unique to India
India's digital landscape presents structural complexities that distinguish its regulatory task from those of other jurisdictions. The sheer volume of content generated and shared across platforms makes manual verification impractical, forcing reliance on automated systems that are still maturing. The country's linguistic diversity further complicates detection and labelling, as AI tools must perform reliably across many languages, dialects, and cultural contexts.
These challenges intensify during periods of heightened political activity, when deepfakes can sway public sentiment amid existing polarization and entrenched misinformation cycles. India's governance architecture also remains fragmented, with overlapping mandates under the DPDP Act, the IT Rules, CERT-In directives, and emerging AI-specific regulations. This proliferation of frameworks risks creating uncertainty and duplicative compliance obligations.
Successful implementation of the draft framework will therefore require coordinated oversight, investment in institutional capacity (including AI-forensics and content-authentication infrastructure), and sustained collaboration among government, industry, and civil society.
7. The Road Ahead: Strategic Improvements Needed
To ensure that the draft rules effectively mitigate deepfake-related harms without unduly burdening innovation or legitimate expression, several refinements may be warranted.
7.1 Adoption of a Risk-Based Categorisation Framework: Distinguishing clearly between high-risk synthetic content (such as impersonation fraud, political manipulation, and non-consensual intimate imagery) and low-risk generative content used for artistic, educational, or parodic purposes would enhance regulatory proportionality and prevent overreach. A simple illustration of such tiering follows.
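One way to picture a risk-based categorisation is as a mapping from content category to obligation level. The categories, tiers, and obligations in the Python sketch below are illustrative assumptions; none of them appear in the draft rules.

```python
# Illustrative risk-tiering sketch; categories and obligations are assumed
# for exposition and are not drawn from the draft rules.
RISK_TIERS = {
    # high-risk: full labelling, provenance, and priority review
    "impersonation_fraud": "high",
    "political_manipulation": "high",
    "non_consensual_intimate_imagery": "high",
    # low-risk: lighter-touch or exempt treatment
    "artistic_rendering": "low",
    "satire_parody": "low",
    "ar_filter": "low",
}

OBLIGATIONS = {
    "high": ["mandatory label", "embedded provenance", "expedited takedown"],
    "low": ["optional label"],
}

def obligations_for(category: str) -> list[str]:
    """Return the assumed obligation set for a content category."""
    tier = RISK_TIERS.get(category, "high")  # default conservatively
    return OBLIGATIONS[tier]

print(obligations_for("satire_parody"))  # ['optional label']
```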
7.2 Provision of Detailed Implementation Guidance: Platforms will require granular guidance on the practical application of the rules, including interpretation of the 10% labelling requirement across diverse content formats, standards for metadata embedding and retention, differentiation between obligations for small and large intermediaries, and the scope of acceptable verification mechanisms.
7.3 Establishment of Dedicated AI Forensics Infrastructure: Creating a national synthetic-media forensics laboratory or a specialised AI oversight body would significantly strengthen investigative and enforcement capabilities, enabling consistent assessment of provenance and manipulation claims.
7.4 Incorporation of Innovation-Friendly Safeguards: Regulatory sandboxes, phased compliance, or tiered obligations for start-ups and smaller enterprises can help balance the need for robust safeguards with the imperative to maintain a supportive environment for technological innovation.
7.5 Investment in Public Literacy and Awareness Campaigns: For disclosure and labelling requirements to be meaningful, users must understand their significance. Targeted literacy initiatives, particularly in vernacular languages and among first-time internet users, are essential to ensure informed engagement with synthetic media.
8. Conclusion: A Strong Start Requiring Careful Calibration
India's proposed regulations concerning synthetic media mark a noteworthy and timely advance in the governance of deepfakes, aligning the country with emerging international standards while incorporating distinctive elements such as measurable labelling thresholds. Their success, however, will hinge on the precision of implementation guidelines, the strength of enforcement mechanisms, and the ability to reconcile regulatory aims with innovation and legitimate expression. For enterprises and digital platforms, compliance with synthetic-media regulation is now an immediate and essential component of operational strategy. India has laid a commendable foundation, but careful refinement and continued institutional capacity-building will be vital if the framework is to mature into a globally relevant and effective regulatory model.