"A Lawless Lens Is a Nation's Threat — Time to Regulate Deepfake Tech."
Abstract
Deepfakes are a new digital hazard born of advances in artificial intelligence (AI). AI-generated or AI-altered videos and audio recordings now look and sound so convincing that they are increasingly used to deceive, damage reputations, invade privacy, and provoke public disorder. Because India's laws were written largely for human creators and traditional media, they are ill-suited to the legal, ethical, and technological complications deepfakes raise. This blog examines why the Information Technology Act, the Indian Penal Code, and the Copyright Act fall short in dealing with deepfakes. It also looks at influential international cases such as Naruto v. Slater, Thaler v. Hirshfeld, and Feist Publications v. Rural Telephone Service to show how the existing legal order remains centred on human actors. It concludes by proposing building blocks for a dedicated Indian legal framework to regulate deepfake technology, protect victims, and fix accountability.
Introduction
Artificial Intelligence (AI) is transforming how content is created and consumed, and the deepfake is its most discussed adverse effect: synthetic material that can make it appear that someone did or said something they never did. These strikingly realistic manipulations can be used for satire, education, or entertainment, but they are equally suited to political propaganda, fraud, non-consensual pornography, criminal defamation, and disinformation campaigns. Like many legal systems around the world, India's is poorly equipped to deal with this threat. Deepfakes differ from other digital works because they unsettle legal principles founded on human authorship, intention, and responsibility. It is therefore essential to identify the gaps in current regulation and devise a framework that can effectively counter the hazards deepfakes pose.
Insufficiency of Current Legal Provisions in India
1. Information Technology Act, 2000
The IT Act is India's principal statute governing cybercrime and electronic communication. Sections 66E, 67, and 67A address violations of privacy and the publication of obscene content, but they say nothing about deepfakes that are false or defamatory without being obscene. The Act also presumes human intent, which is difficult to establish, or simply absent, in AI-generated content.
2. Indian Penal Code, 1860
The IPC contains provisions against:
- Defamation (Section 499)
- Criminal Intimidation (Section 503)
- Public Mischief (Section 505)
However, these provisions require mens rea, which cannot be attributed to autonomous AI systems. The IPC is built around human actors and cannot treat AI as a direct or indirect offender.
3. Copyright and Personality Rights
Deepfakes trespass on both fronts: they exploit a person's likeness or voice without consent, and they draw on existing films, photographs, or music. The Copyright Act, 1957, however, presupposes a human author, so works generated by AI have no recognised owner. Nor does the law currently attach any moral or economic rights to AI-altered likenesses.
Legal Challenges Presented by Deepfakes
- Defamation and Reputational Harm: Deepfakes have been used to destroy careers, reputations, and relationships. Indian defamation law is not built for the speed and scale at which such material spreads.
- Invasion of Privacy: Using a person's voice or face without consent is a plain violation of privacy. Deepfake pornography is a particular menace for Indian women.
- Misinformation and Threats to Democracy: Deepfakes can propagate false information, confuse voters, increase tensions between groups, and weaken governments.
- Copyright and Ownership: Who owns a deepfake: the programmer, the user, or the AI? IP law is hard to enforce while that question stays open.
- Admissibility of Evidence: Deepfakes cast doubt on video and audio evidence in court, potentially putting justice itself at risk.
- Intermediary Liability: Deepfakes spread rapidly on social media, yet current law holds platforms only loosely responsible for them.
International Cases Shaping the Legal Discussion
To see the limits of current legislation, consider three landmark cases that bear, directly or indirectly, on AI and authorship:
1. Naruto v. Slater (2018)
Court: U.S. Court of Appeals for the Ninth Circuit
Citation: 888 F.3d 418 (9th Cir. 2018)
Facts: Naruto, a crested macaque, snapped selfies with a wildlife photographer's camera. The photographs went viral, and PETA sued on Naruto's behalf, claiming the monkey owned the copyright in them.
Legal Question: Can a monkey own a copyright in the United States?
Decision: The court held that only human beings can be copyright authors. Animals, and by extension AI systems, cannot claim or hold copyright.
Application to Deepfakes: Naruto exposes a central flaw in modern copyright law: it recognises human authorship alone. On that reasoning, AI-generated content such as deepfakes attracts neither copyright protection nor clear liability, leaving a dangerous legal gap.
2. Thaler v. Hirshfeld (2021)
Court: U.S. District Court, Eastern District of Virginia
Citation: 558 F. Supp. 3d 238 (E.D. Va. 2021)
Facts: Stephen Thaler built an AI system called DABUS that generated inventions autonomously. He filed patent applications naming DABUS as the inventor.
Legal Question: Is an AI system a legal inventor?
Ruling: The court said no: under the U.S. Patent Act, only natural persons can be inventors.
Relevance to Deepfakes: The case confirms that AI has no legal personhood, so it can neither claim rights nor bear responsibility. Under current legal definitions, it makes no sense to hold the AI itself liable for the harm a deepfake causes; responsibility must rest with the humans and institutions behind it.
3. Feist Publications v. Rural Telephone Service (1991)
Court: United States Supreme Court
Citation: 499 U.S. 340 (1991)
Facts: Feist copied factual listings from Rural Telephone's white pages to compile its own directory. Rural sued, alleging its work had been stolen.
Legal Question: Can a compilation of facts be copyrighted?
Ruling: The court held that facts alone cannot be copyrighted and that a compilation must show a minimal spark of creativity to be protected; mere hard work is not enough.
Relevance to Deepfakes: Deepfakes typically alter or rearrange material that already exists. Without human creativity they may fall outside copyright altogether, which muddies both ownership and enforcement.
Why India Needs a Specific Deepfake Law
Clear Definitions: Enact precise legal definitions of "deepfake", "synthetic media", and "AI-generated content" to avoid ambiguity.
Criminal Offences: Create specific offences punishing:
- Making or sharing deepfakes for malicious reasons
- Sharing personal pictures without consent
- Deepfake-enabled fraud and political deepfakes
Civil Remedies for Victims: Allow victims to sue for:
- Libel
- Invasion of privacy
- Emotional distress
- Orders to take down harmful content
Regulate Intermediaries: Require platforms to:
- Detect and label deepfakes
- Remove harmful content
- Disclose uploaders' IP data to investigators
- Cooperate with law enforcement
Technological Countermeasures: Support deepfake-detection tools and public awareness campaigns to blunt the effects of false information (a detection-pipeline sketch follows this list).
Evidentiary Protocols: Amend the Indian Evidence Act with procedures for authenticating digital content in the deepfake era (a simple integrity-check sketch also follows this list).
Liability Mechanism: Even where the AI generates the deepfake, liability must attach to:
- The user who created or commissioned it
- The platform that hosted it
- Third parties that helped spread it
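For context on what "detection" involves in practice: platform-side screening typically samples frames from an uploaded video and scores each frame with a trained classifier. The Python sketch below shows only that pipeline shape, under stated assumptions: OpenCV (cv2) is a real library used here for frame extraction, while StubDetector and its score_frame method are hypothetical placeholders for whatever model a platform would actually deploy.

```python
import cv2  # OpenCV: real library, used here only to read video frames


class StubDetector:
    """HYPOTHETICAL stand-in for a trained deepfake-detection model.

    A real deployment would load a licensed or in-house classifier here;
    this stub exists only so the pipeline below runs end to end."""

    def score_frame(self, frame) -> float:
        # A real model would return a probability that the frame is manipulated.
        return 0.0


def average_fake_score(video_path: str, frame_step: int = 30) -> float:
    """Sample every frame_step-th frame of a video and average detector scores."""
    detector = StubDetector()
    capture = cv2.VideoCapture(video_path)  # real OpenCV API
    scores, index = [], 0
    while True:
        ok, frame = capture.read()  # returns (success_flag, frame_as_numpy_array)
        if not ok:
            break
        if index % frame_step == 0:
            scores.append(detector.score_frame(frame))
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0
```

This is a sketch of the workflow a regulation might mandate, not a working detector; the hard part, the model itself, is exactly what the proposed law would need platforms to invest in.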
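On the evidentiary side, the most basic authentication step a protocol could mandate is recording a cryptographic digest of a file at the moment it is seized. A minimal sketch using only Python's standard library (the filename is illustrative):

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# A digest recorded at seizure lets a court later confirm the exhibit is
# unaltered: any edit or re-encoding changes the digest completely.
print(sha256_of_file("exhibit_video.mp4"))  # illustrative filename
```

A digest proves integrity only from the point of recording onward; it cannot show that the content was genuine when captured, which is why detection tools and provenance rules matter alongside it.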
Conclusion
Deepfakes may be the defining legal challenge of the AI age. India's current legal system, built on assumptions of human agency, cannot cope with synthetic media that is anonymous, autonomous, and viral. International jurisprudence, from Naruto to Thaler to Feist, shows the law still wrestling with the fact that non-human actors cannot create, own, or be held liable under existing doctrine. Technology, however, will not wait for the law. India must act now and build a robust, technically informed legal framework for deepfakes, one that shields people from harm while leaving room for innovation. If it does not, the legal vacuum will only widen, imperilling privacy, truth, and trust in the online world.
References
- Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018); case summary: https://www.peta.org/features/peta-foundation-legal/case-summaries/naruto-v-slater/
- Thaler v. Hirshfeld, 558 F. Supp. 3d 238 (E.D. Va. 2021); https://www.thepatentplaybook.com/wp-content/uploads/sites/56/2021/09/Thaler-v.-Hirshfeld.pdf
- Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991); https://supreme.justia.com/cases/federal/us/499/340/; https://en.wikipedia.org/wiki/Feist_Publications,_Inc.,_v._Rural_Telephone_Service_Co.
- The Information Technology Act, 2000 (India)
- The Indian Penal Code, 1860
- The Copyright Act, 1957 (India)
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.