Introduction

On November 07, 2023, a day after an objectionable video of actor Rashmika Mandanna surfaced on several social media platforms, the actor publicly questioned its authenticity. Her face had been superimposed, without her knowledge or consent, on the body of a British Indian influencer named Zara Patel.

This is an example of what is called a 'deepfake' video, so named for the Artificial Intelligence (AI) tools used to manipulate images and videos. The images thus created are a form of disinformation, and whether they are objectionable will depend on the context and on how they are perceived. Celebrities are an easy target, and their objectionable videos are a most marketable commodity.

How are deepfakes created?

Deepfakes are videos that create an illusion, using deep learning, AI and image-editing techniques to fabricate images of events and spread disinformation. Technologies such as Generative Adversarial Networks (GANs) and Machine Learning (ML) work in combination to create these videos.

That is how the late actor Paul Walker was digitally resurrected for Fast & Furious 7. During the 2020 Delhi legislative assembly election campaign, politician Manoj Tiwari's speech delivered in English was manipulated and disseminated in the Haryanvi dialect. To create a deepfake video, however, the creator must first train a neural network on many hours of real video footage of the person, so that the network gains a realistic 'understanding' of that person's appearance. The trained network is then combined with computer-graphics techniques to superimpose a copy of the person onto a different actor1.

Deepfake imagery can imitate a face, body, voice, speech, environment, or any other personal attribute, manipulated to create an impersonation.
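
For readers curious about the mechanics, the following is a minimal, illustrative sketch of the adversarial training loop that underlies GAN-based synthesis, written in Python and assuming the PyTorch library is available. The network sizes, names and the random stand-in "training data" are assumptions made purely for illustration; an actual deepfake pipeline trains far larger networks on many hours of real footage, as described above.

```python
# Illustrative sketch of a GAN training loop (NOT a working deepfake tool).
# Assumes PyTorch; all sizes, names and the random "data" are placeholders.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100       # flattened 64x64 face crops (assumed)

generator = nn.Sequential(              # maps random noise to a synthetic image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(          # scores how "real" an image looks
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):                 # real systems train for many hours
    real = torch.rand(32, IMG_DIM)      # stand-in for genuine video frames
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise)

    # The discriminator learns to separate real frames from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # The generator learns to fool the discriminator; over time its output
    # becomes increasingly difficult to distinguish from the real footage.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two networks are trained against each other: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is precisely what makes the resulting videos hard to detect.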

Greatest risks

Gender Inequity

The term 'deepfake' was coined in 2017 by a Reddit user who created a forum dedicated to the use of deep learning software to synthetically swap the faces of female celebrities into pornographic videos. According to a report by Deeptrace, an Amsterdam-based cybersecurity firm, deepfake pornographic videos overwhelmingly target women rather than men, thereby deepening gender inequality2. Women form roughly 90% of the victims of crimes such as revenge porn, non-consensual pornography and other forms of harassment, and deepfakes are one more addition to that list.

Political Risks

In March 2022, a video message of Ukrainian President Volodymyr Zelenskyy surfaced on social media platforms in which the President appeared to implore Ukrainians to lay down their arms and surrender. The President's office immediately disavowed the video and identified it as a deepfake. It was the first high-profile use of a deepfake during an armed conflict and marked a turning point in information operations3.

Financial Risks

Deepfakes are not limited to imagery and videos; as elaborated above, AI tools also exist to clone the voices of individuals and execute financial scams. About 47% of Indian adults have experienced, or know someone who has experienced, some form of AI voice scam. As per McAfee's report on AI voice scams, around 83% of Indian victims reported losing money, with 48% losing over INR 50,0004.

Laws regulating Deepfakes in India

The Ministry of Electronics and Information Technology, in its latest advisory dated November 07, 2023, has directed significant social media intermediaries to5:

  • Ensure that due diligence is exercised and reasonable efforts are made to identify misinformation and deepfakes, in particular information that violates the provisions of rules and regulations and/or user agreements;
  • Ensure that such cases are expeditiously actioned against, well within the timeframes stipulated under the IT Rules, 2021;
  • Ensure that users do not host such information/content/deepfakes;
  • Remove any such content within 36 hours of it being reported; and
  • Ensure expeditious action, well within the timeframes stipulated under the IT Rules, 2021, and disable access to the content/information.

The Information Technology Act, 2000

The advisory further reiterated that any failure to act as per the relevant provisions of the Information Technology Act, 2000 (hereinafter referred to as the "IT Act") and Rule 7 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter referred to as the "IT Rules") would render organisations liable to lose the protection available under Section 79(1) of the IT Act.

Section 79(1) of the IT Act exempts online intermediaries from liability for any third-party information, data, or communication link made available or hosted by them, while Rule 7 of the IT Rules empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code.

Section 66E of the IT Act prescribes punishment for violation of an individual's privacy through publishing or transmitting an image of the private area of such person without his or her consent, with imprisonment of up to three years or a fine of up to INR 2 lakh, or both.

Sections 67, 67A and 67B of the IT Act specifically prohibit and prescribe punishments for publishing or transmitting, in electronic form, obscene material, material containing sexually explicit acts, and material depicting children in sexually explicit acts, respectively.

In cases of impersonation in electronic form, including artificially morphed images of an individual, social media companies have been advised to take action within 24 hours of receiving a complaint about such content. In this regard, Section 66D of the IT Act prescribes punishment of up to three years' imprisonment along with a fine of up to INR 1 lakh for anyone who cheats by personation using any communication device or computer resource.

Advisory for the Aggrieved

The Union Minister further encouraged aggrieved persons to file First Information Reports (FIRs) at their nearest police station and to avail themselves of the remedies provided under the IT Rules.

Global Regulation on AI

The Bletchley Declaration – A collective effort in a collaborative spirit

Twenty-nine signatories, including the US, Canada, Australia, China, Germany, India and the European Union, have come together to prevent the 'catastrophic harm, either deliberate or unintentional' that may arise from the ever-increasing use of AI6.

The Declaration marks a step forward for countries to cooperate and collaborate on the existing and potential risks of AI, and sets out an agenda aimed at:

a. Identifying risks in the arena of AI; and
b. Building respective risk-based policies across countries, aimed at increasing transparency among private players developing frontier AI capabilities.

Countries that have taken proactive steps towards curbing the menace of deepfakes

The UK government plans to introduce national guidelines for the AI industry and is evaluating legislation that would require clear labelling of AI-generated photos and videos7.

The European Union has enforced the Digital Services Act, which obliges social media platforms to adhere to labelling obligations, enhancing transparency and helping users determine the authenticity of media8.

South Korea has passed a law that makes it illegal to distribute deepfakes that could harm the public interest, with offenders facing up to five years of imprisonment or fines of up to 50 million won (approximately USD 43,000)9.

In January 2023, China's Cyberspace Administration, Ministry of Industry and Information Technology, and Ministry of Public Security stressed that deepfakes must be clearly labelled in order to prevent public confusion10.

The United States has called upon the Department of Homeland Security (DHS) to establish a task force to address digital content forgeries, also known as "deepfakes", and many states have enacted their own legislation to combat them.

Lawsuits that paved the way against Deepfakes – India

Closer to home, Bollywood actor Anil Kapoor filed a lawsuit after finding AI-generated deepfake content that used his likeness and voice to create GIFs, emojis, ringtones and even sexually explicit content. In that lawsuit, Anil Kapoor v. Simply Life India and Ors11., the Delhi High Court granted protection to the actor's persona and personal attributes against misuse, specifically through AI (Artificial Intelligence) tools for creating deepfakes. The Court granted an ex-parte injunction that effectively restrained sixteen (16) entities from utilising the actor's name, likeness and image, and from employing technological tools such as AI, for financial gain or commercial purposes. On the same lines, in Amitabh Bachchan v. Rajat Negi and Ors12., the legendary actor Mr. Amitabh Bachchan was granted an ad interim in rem injunction against the unauthorised use of his personality rights and personal attributes, such as his voice, name, image and likeness, for commercial use.

Ways to combat

Technological Solutions – Use of Blockchain in combating deepfakes

Axon Enterprise Inc., the leading maker of US police body cameras, has upgraded its devices in ways that could help discredit deepfake videos. Axon's Body 3 camera was rolled out after body-camera footage emerged as crucial evidence in cases of alleged police misconduct and defence lawyers questioned the integrity of police videos, citing noticeable edits that shortened a scene or adjusted a timestamp. The upgraded camera introduces additional security, rendering captured footage inaccessible for playback, download or editing by default without a form of authentication such as a password13.
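
At its core, the mechanism described above is cryptographic authentication of footage at the point of capture. The short Python sketch below, using only the standard library, illustrates the general idea of sealing a recording with a keyed hash so that any later edit is detectable; the key handling and function names are illustrative assumptions on our part, not a description of Axon's actual design.

```python
# Illustrative sketch: tamper-evident sealing of captured footage.
# The device key and function names are hypothetical, for explanation only.
import hashlib
import hmac

DEVICE_KEY = b"per-device-secret-key"   # hypothetical key provisioned to the camera

def seal_footage(data: bytes) -> str:
    """Compute an authentication tag over the footage at capture time."""
    return hmac.new(DEVICE_KEY, hashlib.sha256(data).digest(), hashlib.sha256).hexdigest()

def verify_footage(data: bytes, tag: str) -> bool:
    """Re-compute the tag and compare; any edit to the footage changes it."""
    return hmac.compare_digest(seal_footage(data), tag)

original = b"...raw video bytes..."
tag = seal_footage(original)

print(verify_footage(original, tag))            # True: footage unchanged
print(verify_footage(original + b"x", tag))     # False: tampering detected
```

Anchoring such tags in an append-only ledger such as a blockchain would additionally make the record of when the footage was captured difficult to rewrite.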

Responsibility and Accountability of social media platforms

Pictures and images are personal data of an individual, capable of identifying that very individual, as defined under the Digital Personal Data Protection Act, 2023 (the "Act"). Deepfakes are, thus, a breach of personal data and a violation of an individual's right to privacy. Publicly available data may not fall within the ambit of the Act, but social media giants will still have to own up if the information posted on their platforms can be mined for the purpose of creating misinformation. Further, the dissemination of this disinformation also takes place through social media channels, and controls have to be put in place for the same. YouTube has recently announced measures requiring creators to disclose whether content has been created through AI tools. The need will be to create a uniform standard that all channels can adhere to and that is common across borders.

This article was first published on Live Law.

To know more about this, read our article: https://www.barandbench.com/law-firms/view-point/the-digital-personal-data-protection-act-2023-a-scenario-of-arising-liabilities-2#:~:text=The%20Act%20imposes%20hefty%20penalties,to%20prevent%20personal%20data%20breach.

The Union Minister of Electronics and Information Technology announced on November 17, 2023 that the government is likely to unveil, on November 24, 2023, a framework to deal with the misuse of AI technology and 'deepfakes'14.

Footnotes

1. https://spectrum.ieee.org/what-is-deepfake

2. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf

3. https://www.brookings.edu/wp-content/uploads/2023/01/FP_20230105_deepfakes_international_conflict.pdf

4. https://economictimes.indiatimes.com/tech/technology/almost-half-of-indians-experience-ai-enabled-fake-voice-scams-83-victims-lost-money-mcafee-survey/articleshow/99915954.cms

5. https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445

6. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

7. https://www.openaccessgovernment.org/uk-considers-clear-labeling-law-combat-ai-deepfakes/161861/

8. https://www.openaccessgovernment.org/uk-considers-clear-labeling-law-combat-ai-deepfakes/161861/

9. https://www.responsible.ai/post/a-look-at-global-deepfake-regulation-approaches#:~:text=The%20EU%20has%20proposed%20laws,of%20global%20revenue%20for%20violators

10. https://www.globaltimes.cn/page/202301/1283499.shtml

11. CS(COMM) 652/2023 and I.A. 18237/2023-18243/2023

12. 2022 SCC OnLine Del 4110.

13. https://www.reuters.com/article/us-axon-deepfakes-idUSKBN1WI0YG/

14. https://www.businessinsider.in/india/news/will-take-major-steps-on-deepfake-issue-wait-till-24th-november-rajeev-chandrasekhar/articleshow/105382553.cms


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.