ARTICLE
21 November 2025

India's Approach To Regulate Synthetic And AI-generated Content: How It Aligns With Global Deepfake Regulation Frameworks

Lexplosion Solutions Private Limited

Contributor

Lexplosion Solutions is a leading Legal-Tech company providing legal risk management solutions in the areas of compliance management, audits, contract lifecycle management, litigation management and corporate governance. Lexplosion merges disruptive technology with legal domain expertise to create solutions that increase efficiency and reduce costs.

As synthetic and AI-generated content becomes increasingly deceptive and dangerous, regulators across the globe are shifting gears from merely issuing guidance and directives to legislating legally enforceable obligations such as labelling requirements, takedown directives, election-period restrictions and criminal penalties for intimate deepfakes.

India is joining the global effort with the Ministry of Electronics and Information Technology (MeitY) proposing amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The proposed amendments, once implemented, will mandate visible labels, persistent metadata, uploader declarations and verification obligations, signalling a shift from good-to-have AI policies to binding regulatory compliance requirements. For businesses, this means concrete obligations: declaring synthetic and AI-generated information, labelling synthetic content prominently with permanent unique identifiers, and maintaining audit-ready records for inspections. Non-compliance may result in fines, regulatory directives and reputational damage.

India's Draft IT Amendments

MeitY published draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 on 22 October 2025 and invited feedback and comments from stakeholders until 13 November 2025.

The amendments aim to address the growing threat of synthetic and AI-generated content, particularly deepfakes, by requiring Social Media Intermediaries (SMIs) to ensure that any synthetically generated or manipulated content carries visible labels and persistent metadata identifiers. They introduce minimum visibility thresholds for labels/markers, along with user declarations through which uploaders confirm whether content is synthetic. SMIs must also adopt reasonable verification mechanisms to ensure these requirements are followed.

Compliance obligations

For intermediaries that enable the creation or modification of synthetic media:

  • Label every synthetic output prominently or embed a permanent unique metadata identifier that enables immediate identification of the content as synthetic.
  • Ensure the label is visible or audible in a prominent manner, i.e., covering at least 10% of the surface area for visual content or the first 10% of the duration for audio content (a sketch of one way to meet this threshold follows this list).
  • Prevent modification, suppression or removal of the label or identifier.
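
To make the 10% visibility threshold concrete, below is a minimal Python sketch (using Pillow) of how a generation tool might stamp a full-width banner covering at least 10% of an image's surface area and attach a machine-readable identifier. The tag name SyntheticMediaID and the banner design are illustrative assumptions, not formats prescribed by the draft rules; a production system would likely anchor the "permanent" identifier in a provenance standard such as C2PA Content Credentials rather than a strippable PNG text chunk.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Hypothetical identifier value -- the draft rules do not prescribe a format.
SYNTHETIC_TAG = "synthetic-media:v1"

def label_synthetic_image(path_in: str, path_out: str,
                          min_area_ratio: float = 0.10) -> None:
    """Stamp a visible banner covering at least `min_area_ratio` of the
    image surface and attach a machine-readable metadata identifier."""
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    # A full-width banner with height equal to 10% of the image covers
    # exactly 10% of the surface area, satisfying the visibility threshold.
    banner_h = max(1, int(h * min_area_ratio))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - banner_h), (w, h)], fill=(0, 0, 0))
    draw.text((10, h - banner_h + 5), "AI-GENERATED CONTENT", fill=(255, 255, 255))
    # Persist the identifier as a PNG text chunk. Note: text chunks can be
    # stripped during transcoding; a C2PA-style provenance manifest would be
    # more robust for a truly "permanent" identifier.
    meta = PngInfo()
    meta.add_text("SyntheticMediaID", SYNTHETIC_TAG)
    img.save(path_out, "PNG", pnginfo=meta)
```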

For significant social media intermediaries (SSMIs) that display, upload or publish content:

  • Require users to declare whether the content is synthetic before display, upload or publication.
  • Use reasonable and appropriate technical measures (including automated tools) to verify declarations, considering the nature, format and source of the content; a triage sketch follows this list.
  • Label synthetic content clearly and prominently or display a notice indicating it is synthetic.
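
As a minimal sketch of how the declaration-plus-verification flow could be wired, the Python below combines the uploader's declaration with a score from a hypothetical automated classifier; the names, the 0.8 threshold and the classifier itself are assumptions for illustration, not requirements taken from the draft rules.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    LABEL_AS_SYNTHETIC = auto()   # publish with a prominent synthetic-content notice
    PUBLISH_AS_IS = auto()
    HOLD_FOR_REVIEW = auto()      # declaration conflicts with automated checks

@dataclass
class UploadRequest:
    content_id: str
    declared_synthetic: bool   # uploader's pre-upload declaration
    detector_score: float      # 0..1 score from an assumed synthetic-media classifier

def triage(upload: UploadRequest, detector_threshold: float = 0.8) -> Verdict:
    """Combine the uploader's declaration with an automated check,
    escalating when the two disagree instead of silently publishing."""
    if upload.declared_synthetic:
        return Verdict.LABEL_AS_SYNTHETIC
    if upload.detector_score >= detector_threshold:
        return Verdict.HOLD_FOR_REVIEW
    return Verdict.PUBLISH_AS_IS
```

Escalating disagreements to human review, rather than auto-labelling, is one way to apply "reasonable and appropriate" measures without presuming detectors are infallible.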

These draft amendments mark India's first concrete step towards operational accountability in synthetic media governance. The goal is to make synthetic content transparently identifiable and to align the compliance framework with India's vision to ensure an Open, Safe, Trusted and Accountable Internet for its citizens.

Comparative Global Context

While India's draft is at the consultation stage, other countries, notably Singapore, Australia, the UK and the US, already have enforceable legal frameworks in place that impose clear operational and disclosure obligations on intermediaries, social media companies and content hosts.

Singapore

Singapore treats deepfakes as a serious and pressing concern and has put in place several protections against their publication and sharing. The Online Criminal Harms Act (OCHA), enacted in July 2023, introduced provisions empowering authorities to issue legally binding directions to platforms to remove or restrict access to harmful or criminal online content, including deepfakes used in scams or impersonations. These provisions came into effect on 1 February 2024. Companies that fail to comply with certain OCHA directions can face fines of up to SGD 1 million (with possible daily continuing fines), depending on the type of direction and entity.

Moreover, on 15 October 2024, Singapore passed the Elections (Integrity of Online Advertising) (Amendment) Bill, which explicitly bans AI-generated or manipulated depictions that misrepresent a political candidate's actions or statements. Such content is deemed illegal, and platforms must block or remove it during the election period or face fines of up to SGD 1 million.

Compliance Takeaways:

  • Develop a dedicated response mechanism for OCHA takedown orders with defined escalation paths (sketched after this list).
  • Implement mechanisms to identify and restrict synthetic political content during election windows.
  • Maintain internal workflows to disclose user or account data when directed by regulators.
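
As an illustration of the first takeaway, the sketch below models a directive with a stated deadline, an append-only action log and an escalation check. The field names and the six-hour escalation buffer are assumptions for illustration; actual OCHA directions specify their own terms.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class TakedownDirective:
    directive_id: str
    received_at: datetime
    deadline_hours: int                      # deadline stated in the directive
    actions: list[str] = field(default_factory=list)

    @property
    def due_at(self) -> datetime:
        return self.received_at + timedelta(hours=self.deadline_hours)

    def log(self, action: str) -> None:
        # Timestamped entries double as the audit trail shown to regulators.
        self.actions.append(f"{datetime.now(timezone.utc).isoformat()} {action}")

    def needs_escalation(self, now: datetime, buffer_hours: int = 6) -> bool:
        """Escalate to legal/compliance leads once the remaining time
        falls inside the buffer and the directive is still open."""
        return now >= self.due_at - timedelta(hours=buffer_hours)
```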

Australia

Australia has a well-defined regime against the creation and sharing of deepfakes. Under the Online Safety Act 2021, the eSafety Commissioner can order the removal of non-consensual intimate images, including deepfake images, within 24 hours. Platforms that fail to comply with an eSafety removal notice can face significant financial penalties of up to 2,500 penalty units.

In 2024, the Criminal Code was amended to make creating, distributing or possessing deepfake sexual material a criminal offence, punishable by imprisonment.

Compliance Takeaways:

  • Develop a 24-hour takedown process for synthetic sexual or intimate content.
  • Implement processes to block re-uploads of harmful or abusive material (a hashing sketch follows this list).
  • Maintain detailed incident logs and escalation records for regulator audits.
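
For the re-upload blocking takeaway, a minimal sketch is a blocklist keyed on digests of removed content. SHA-256 catches only byte-identical re-uploads; real systems typically add a perceptual hash such as PDQ or pHash to catch re-encoded near-duplicates.

```python
import hashlib

class ReuploadBlocklist:
    """Blocklist keyed on digests of content removed under a takedown.
    SHA-256 only catches byte-identical re-uploads; a production system
    would add a perceptual hash (e.g., PDQ or pHash) for near-duplicates."""

    def __init__(self) -> None:
        self._digests: set[str] = set()

    @staticmethod
    def _digest(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def register_removed(self, content: bytes) -> None:
        self._digests.add(self._digest(content))

    def is_blocked(self, content: bytes) -> bool:
        return self._digest(content) in self._digests
```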

United Kingdom

The UK's Online Safety Act 2023 takes a hybrid approach. It imposes a duty of care on user-to-user and search service providers to identify, assess and mitigate risks from illegal and harmful content, including AI-generated or manipulated intimate images shared without consent. From 31 January 2024, distributing non-consensual deepfake intimate images became illegal.

The Act also tasks Ofcom with setting detailed Codes of Practice that define how platforms must handle illegal and harmful content, including synthetic intimate images. Accordingly, Ofcom has issued Protection of Children Codes of Practice and Illegal Content Codes of Practice for providers of user-to-user services and search services.

Additionally, the UK Government has proposed to criminalise the creation and distribution of sexually explicit deepfakes, aligning with the broader Online Safety framework. Ofcom's guidance also stresses 'safety by design', meaning platforms must build detection, moderation and reporting tools into their architecture rather than relying on user complaints.

Compliance Takeaways:

  • Implement automated detection and reporting systems for explicit synthetic media.
  • Maintain and preserve audit trails and evidence logs for Ofcom inspections (see the logging sketch after this list).
  • Integrate user-reporting flows that meet mandated response times.
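
One simple way to keep inspection-ready evidence, per the takeaways above, is an append-only JSON-lines audit log; the event names and fields in this sketch are illustrative assumptions, not an Ofcom-mandated schema.

```python
import json
from datetime import datetime, timezone

def append_audit_event(log_path: str, event_type: str, **details) -> None:
    """Append one self-describing record to a JSON-lines audit log so each
    moderation decision can be evidenced during an inspection."""
    record = {"ts": datetime.now(timezone.utc).isoformat(),
              "event": event_type, **details}
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Example lifecycle of a user report, tracked against response-time targets:
append_audit_event("audit.jsonl", "report_received", content_id="c123", reporter="u456")
append_audit_event("audit.jsonl", "content_removed", content_id="c123",
                   reason="non-consensual synthetic intimate image")
```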

United States

The US lacks a unified federal framework on deepfakes, but state-level laws are expanding quickly. Over 25 states have enacted statutes targeting election-related deepfakes, requiring disclaimers on AI-generated political content or banning deceptive synthetic media within a specified window before an election.

Further, the Federal Trade Commission (FTC) has begun addressing AI-generated deception under its consumer protection mandate, warning advertisers and influencers that synthetic endorsements or impersonations could trigger civil penalties for deceptive practices. The Federal Communications Commission (FCC) is also pushing for disclosure rules for AI-generated political ads.

Compliance Takeaways:

  • Monitor, maintain and adhere to state-specific election and deepfake policies (see the sketch after this list).
  • Ensure all AI-generated endorsements and testimonials are clearly labelled and truthful.
  • Preserve documentation proving any AI-assisted ad content is non-deceptive and disclosed.
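
What state-specific monitoring might look like in code: a rules table consulted before a synthetic political ad runs. The two state entries and their numbers are placeholders, not actual statutory values, which vary by state and change frequently.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class StateRule:
    disclaimer_required: bool
    blackout_days_before_election: int   # 0 = no pre-election restriction window

# Placeholder entries only -- real values must come from counsel's survey
# of each state's statute.
STATE_RULES: dict[str, StateRule] = {
    "TX": StateRule(disclaimer_required=True, blackout_days_before_election=30),
    "CA": StateRule(disclaimer_required=True, blackout_days_before_election=60),
}

def check_political_ad(state: str, election_day: date, publish_day: date) -> list[str]:
    """Return the compliance issues a synthetic political ad would raise."""
    rule = STATE_RULES.get(state)
    if rule is None:
        return [f"No rule on file for {state}: route to legal review"]
    issues = []
    if rule.disclaimer_required:
        issues.append("AI-content disclaimer required")
    days_out = (election_day - publish_day).days
    if rule.blackout_days_before_election and 0 <= days_out <= rule.blackout_days_before_election:
        issues.append("Within the pre-election restriction window")
    return issues
```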

For platforms operating globally, compliance can no longer be reactive. Deepfake regulation is fast becoming an operational compliance requirement across major jurisdictions. Many countries have already introduced enforceable obligations that compel companies to detect, label and remove AI-generated deceptive or harmful content, with heavy financial and legal consequences for failure. India's draft amendments signal that the same level of accountability is coming to its digital ecosystem, and India has additionally set quantifiable visibility thresholds and verification duties for platforms. The comparative table below summarises the requirements at a glance for global teams.

| Jurisdiction | Key Obligations | Enforcement Authority | Penalties |
| --- | --- | --- | --- |
| India (draft) | Labels, metadata identifiers, uploader declarations, visibility thresholds | MeitY | Potential loss of safe-harbour protection under S.79 of the IT Act |
| Singapore | Takedown orders, election-period restrictions | Various agencies, including the Singapore Police Force | Civil and criminal; fines of up to SGD 1 million, plus up to SGD 100,000/day for continuing offences |
| Australia | 24-hour takedown; criminal liability for deepfake misuse | eSafety Commissioner | Civil and criminal; up to 2,500 penalty units for corporations |
| UK | Duty of care; Ofcom Codes of Practice | Ofcom | Civil and criminal; fines of up to £18 million or 10% of global turnover |
| US | State-level election deepfake laws; FTC deceptive-advertising rules | Various authorities, including the FTC and state Attorneys General | Civil penalties |

Roadmap for Compliance Teams

Below are key tasks that compliance teams should focus on to prepare for the Indian draft rules and to align with synthetic media requirements elsewhere:

  • Regulatory analysis: Map obligations under the draft rules and identify gaps that exist under the current AI content governance structure of the business.
  • Automate processes: Implement automatic embedding of a permanent unique identifier or metadata tag at the point of content generation or upload.
  • Update internal policies: Revise due-diligence and content moderation policies to explicitly cover synthetically generated information.
  • Implement safeguards: Block all system and user paths that could remove, alter or suppress metadata or labels.
  • Monitor false declarations: Detect and flag users or patterns where synthetic content is uploaded without declaration.
  • Conduct label verification tests: Periodically audit samples to confirm label visibility, persistence and machine-readability (a sampling sketch follows this list).
  • Pre-upload declaration workflow: Modify upload interfaces to require users to confirm if content is synthetically generated before submission.
  • Incident escalation: Route verified violations to moderation teams for swift removal or corrective action.
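
For the label verification tests above, a periodic random sample of published synthetic items can feed a manual or automated audit. This is a minimal sketch; the 2% sampling rate is an arbitrary assumption.

```python
import random

def sample_for_label_audit(content_ids: list[str], rate: float = 0.02,
                           seed: int | None = None) -> list[str]:
    """Draw a random sample of published synthetic items for a periodic
    audit of label visibility, persistence and machine-readability."""
    if not content_ids:
        return []
    rng = random.Random(seed)
    k = max(1, int(len(content_ids) * rate))
    return rng.sample(content_ids, k)
```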

Conclusion: Preparing for Indian and Global Deepfake Compliance

Deepfake regulation has shifted from an ethical debate to operational compliance. Jurisdictions are now converging on requirements that compel platforms to detect, label and remove deceptive or harmful AI-generated content. India's draft amendments, once effective, will bring its digital ecosystem into this global accountability framework.

How Can Our Compliance Management Software Komrisk Help?

Komrisk is compliance management software that can help compliance teams operationalise these obligations through:

  • Regulation tracking across jurisdictions (e.g., India's labelling rules, Singapore's OCHA directives, Australia's takedown timelines).
  • Automated control mapping that links regulatory duties to internal workflows.
  • Evidence logging and audit trails to demonstrate due diligence.

By integrating these capabilities, Komrisk helps organisations maintain proactive compliance, reduce manual oversight and stay ahead of fast-evolving synthetic media regulations. Get in touch with us for a demo of Komrisk.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
