PIB's Response: How To Identify AI Generated Images?

S.S. Rana & Co. Advocates


S.S. Rana & Co. is a Full-Service Law Firm with an emphasis on IPR, having its corporate office in New Delhi and branch offices in Mumbai, Bangalore, Chennai, Chandigarh, and Kolkata. The Firm is dedicated to its vision of proactively assisting its Fortune 500 clients worldwide as well as grassroot innovators, with highest quality legal services.


Artificial Intelligence (AI) has rapidly emerged as a pivotal force in India's technological landscape, becoming a focal point of discussion across various sectors. From enhancing business efficiencies to revolutionizing healthcare and education, AI's impact is profound and far-reaching. However, with its remarkable capabilities come significant challenges and ethical considerations. One notable concern is the rise of AI-generated images and videos, commonly known as deepfakes, which have sparked widespread alarm globally, affecting celebrities, politicians, and everyday individuals alike.1

More than 75% of Indians who are online and were surveyed by the cybersecurity company McAfee have seen some form of deepfake content over the last 12 months, while at least 38% of respondents encountered a deepfake scam during this time, according to McAfee's survey report.2 During the 12-month survey period, every fourth Indian came across political deepfake content that was later found to be fake, McAfee said. There has also been a rise in deepfake scams impersonating not only ordinary users but also prominent figures across spheres such as business, politics, entertainment, and sports, the survey said.

AI technologies can generate and disseminate false information at an alarming scale and speed. With the 2024 elections underway, the proliferation of AI-generated images and videos has posed a significant challenge, influencing public perception and making it difficult to distinguish between real and manipulated content.3

Experts say that with technologies like generative AI, the menace of deepfakes is only going to get worse. There has been a ten-fold increase in complaints related to morphed images or deep nudes created through advanced tools, cyber experts told ET in August.4

Hence, spotting a deepfake becomes crucial, but before diving into identification techniques, it is important to understand what a deepfake is and how it is created.

Deepfakes are synthetic videos or images that create an illusion of reality, using deep learning, AI, and photo-editing techniques to depict events that never occurred and thereby spread disinformation. Technologies such as Generative Adversarial Networks (GANs) and Machine Learning (ML) work in tandem to create these videos.5
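To illustrate the adversarial idea behind GANs in the simplest possible terms, the toy Python sketch below (purely illustrative; real deepfake pipelines use deep neural networks, not this caricature) pits a one-parameter "generator" against a "discriminator": the discriminator learns what real data looks like, and the generator adjusts its output until the discriminator can no longer tell the difference.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real" data distribution the generator tries to imitate

def real_sample():
    """Draw a sample of genuine data."""
    return random.gauss(REAL_MEAN, 0.1)

# Generator: produces fake samples around a learnable parameter `mu`.
# Discriminator: maintains a running estimate of what real data looks like.
mu = 0.0             # generator starts far from the real data
disc_estimate = 0.0  # discriminator's belief about real data

for step in range(200):
    # Discriminator update: refine its estimate using real samples.
    disc_estimate += 0.1 * (real_sample() - disc_estimate)
    # Generator update: nudge `mu` so its fakes look more like what the
    # discriminator currently believes real data to be.
    fake = mu
    mu += 0.1 * (disc_estimate - fake)

print(round(mu, 1))  # generator output now closely resembles real data
```

The adversarial back-and-forth, with each side improving in response to the other, is what allows full-scale GANs to produce images realistic enough to fool human observers.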

For an in-depth explanation of the technology behind deepfakes and its evolution, kindly refer to our article titled "Deepfake Technology: Navigating the Realm of Synthetic Media".6

Government releases an Informative Video: How to identify AI-generated images

In response to this challenge, the Press Information Bureau (PIB) recently released an informative video aimed at educating the public on how to identify AI-generated images without relying on technology. The video offers practical tips for spotting unrealistic images.

To address this, PIB shared simple examples and advised viewers to look for five specific indicators of AI manipulation, namely:

  1. Unrealistic Images:
    Closely examine hair strands, body shape, skin with no pores, and fabric textures. Look for objects defying gravity or intersecting in unrealistic ways, hinting at AI-generated elements.
  2. Decoding Strange Lighting and Shadows:
    Be wary of unrealistic shadows or unusual color contrast defying natural lighting logic. Look for shadows in multiple directions from the same source or appearing even on transparent objects.
  3. Uncanny Symmetry:
    Look for eyes, eyebrows, lips and other facial features appearing perfectly symmetrical, as if meticulously mirrored.
  4. Repetitive Patterns:
    Identify repetitive patterns, such as brickwork lacking any variation, in contrast to the subtle differences found in real construction.
  5. Exposing Logical Inconsistencies:
    Look for objects defying basic laws of physics, impossible animal movements, or illogical object interactions.

Additionally, some basic clues such as missing fingers, mismatched textures, and text anomalies can also help in spotting fake images.

The video emphasized the importance of being vigilant about details like objects defying gravity or high symmetry in faces, which are signs of AI manipulation.

The video also mentioned several online resources/tools that can assist in identifying if any photo is generated by AI or not, providing viewers with tools to critically analyze visual content.

Advisory to social media intermediaries to identify misinformation and deepfakes:

Deepfakes have emerged as a serious threat to democracy and social institutions across the world. The propagation of deepfake content via social media platforms has aggravated this challenge. In response, the Ministry of Electronics and Information Technology (MeitY) has, from time to time, advised social media intermediaries to exercise due diligence and take expeditious action against deepfakes.7

In the press release accompanying a pertinent advisory issued to social media intermediaries on November 07, 2023 to identify misinformation and deepfakes8, the Hon'ble Union Minister of State for Skill Development & Entrepreneurship and Electronics & IT, Shri Rajeev Chandrasekhar, said, "Safety and trust of our Digital Nagriks is our unwavering commitment and top priority for the Narendra Modi Government. Given the significant challenges posed by misinformation and deepfakes, the Ministry of Electronics and Information Technology (MeitY) had issued a second advisory within the last six months, calling upon online platforms to take decisive actions against the spread of deepfakes."

Elaborating further, the advisory mandated all online platforms to remove any such content/information/deepfake within 36 hours of receiving a report from either a user or a government authority. Failure to comply with this requirement shall invoke Rule 7, which empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC).

Hence, the Ministry, while summing up, encouraged every individual to file First Information Reports (FIRs) at their nearest police station and avail the remedies provided under the IT Rules, 2021.

Tech Responses to Spotting Deepfakes:

With AI-generated deepfakes cropping up almost daily, affecting people worldwide, not just in India, it is becoming increasingly difficult to discern what is real from what is not. These fake images, whether featuring Taylor Swift or Donald Trump, might seem harmless at first glance but can be used for nefarious purposes such as scams, identity theft, propaganda, and election manipulation.

Recognizing the growing threat, leading experts in generative AI have identified some tips to help spot these deceptive images9:

  • Electronic Sheen: Many deepfake photos, especially those of people, exhibit an "aesthetic sort of smoothing effect" that makes the skin look incredibly polished, giving it an unnatural electronic sheen.
  • Consistency of Shadows and Lighting: Often, the subject in a deepfake image is in clear focus and appears convincingly lifelike, but elements in the backdrop might not be as realistic or polished. Checking for consistent shadows and lighting can help identify inconsistencies.
  • Face-Swapping: This is one of the most common deepfake methods. Experts suggest closely examining the edges of the face. Does the facial skin tone match the rest of the head or body? Are the edges of the face sharp or blurry? For suspected doctored videos, observe the person's mouth. Do their lip movements match the audio perfectly? Also, examine the teeth. Are they clear, or blurry and inconsistent with how they appear in real life? Cybersecurity companies note that current algorithms might not be sophisticated enough to generate individual teeth accurately, so a lack of defined teeth outlines could be a clue.
  • Context: Sometimes, context is crucial. Experts advise taking a moment to consider whether what you are seeing is plausible. Deepfakes often place people in unrealistic scenarios, so questioning the context can also be a helpful strategy.

Another approach is to use AI to combat AI. Microsoft has recently developed an authenticator tool that analyzes photos or videos to provide a confidence score indicating whether they have been manipulated.

Similarly, Intel's FakeCatcher uses algorithms to analyze an image's pixels to determine its authenticity. It is the world's first real-time deepfake detector that returns results in milliseconds.10

However, tools like Microsoft's authenticator are only available to selected partners and not the general public. This restriction is intentional to prevent bad actors from gaining an edge in the deepfake arms race.
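Although such tools are proprietary, the general idea of a pixel-level "confidence score" can be illustrated with a toy heuristic. The Python sketch below is purely hypothetical (it is not how Microsoft's authenticator or Intel's FakeCatcher actually work): it flags the kind of unnaturally smooth "electronic sheen" described earlier by measuring how much local texture an image patch contains.

```python
# Illustrative only: a toy "confidence score" in the spirit of
# AI-detection tools. Real detectors use trained models; this sketch
# uses a single hand-rolled heuristic for over-smooth regions.

def smoothness_score(image):
    """Return the mean absolute difference between horizontal neighbours.

    `image` is a 2-D list of grayscale values in [0, 255]. Natural photos
    tend to have more local texture (a higher score) than over-smoothed
    AI renders.
    """
    diffs = [
        abs(row[x + 1] - row[x])
        for row in image
        for x in range(len(row) - 1)
    ]
    return sum(diffs) / len(diffs)

def manipulation_confidence(image, texture_threshold=10.0):
    """Map the smoothness heuristic to a rough 0..1 'manipulated' score."""
    score = smoothness_score(image)
    return max(0.0, min(1.0, 1.0 - score / texture_threshold))

# A flat, over-smooth patch versus a textured patch.
smooth_patch = [[128] * 8 for _ in range(8)]
textured_patch = [[(x * 37 + y * 91) % 256 for x in range(8)] for y in range(8)]

print(manipulation_confidence(smooth_patch))    # high: suspiciously smooth
print(manipulation_confidence(textured_patch))  # low: natural texture
```

A single heuristic like this is easily fooled, which is precisely why production detectors combine many trained signals rather than one hand-written rule.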

Legislative Response to Online Impersonation and Deepfakes

Lawmakers in at least 17 U.S. states have enacted laws specifically addressing online impersonation, given how important a part of daily life social media has become. These laws address the intent to intimidate, bully, threaten, or harass individuals through social media, email, or other electronic communications. These states include California, Connecticut, Florida, Hawaii, Illinois, Louisiana, Massachusetts, Mississippi, New Jersey, New York, North Carolina, Oklahoma, Rhode Island, Texas, Utah, Washington and Wyoming.11

Starting in 2019, several states began passing legislation aimed at addressing the use of deepfakes. These laws do not solely target AI-generated deepfakes but broadly cover any deceptive, manipulative audio or visual images created with malicious intent that falsely depict individuals without their consent. Most of these laws focus on sexually explicit or pornographic video images, with some expanding existing laws on non-consensual intimate images. In addition, some states have started to prohibit the distribution of manipulated media aimed at damaging a candidate's reputation or deceiving voters in elections.

Looking ahead to the 2024 legislative session, at least 40 states have pending legislation related to this issue, with at least 20 bills already enacted. This wave of legislation underscores the growing recognition of the threat posed by online impersonation and deepfakes, and the need for robust legal frameworks to protect individuals from the harmful effects of deepfakes and other forms of online deception.

For instance, Florida requires certain political advertisements, electioneering communications, or other miscellaneous advertisements to include a specified disclaimer and provides for criminal and civil penalties. Alabama, meanwhile, enacted a bill providing that a person commits the crime of creating a private image if he or she knowingly creates, records, or alters a private image where the depicted individual has not consented to the creation, recording, or alteration and had a reasonable expectation of privacy.


The public response to the PIB's initiative has been overwhelmingly positive. Many internet users have expressed their gratitude for the informative and insightful content.

As AI technology continues to evolve, it is essential for the public to stay informed about both its benefits and its potential pitfalls. Such efforts to spread awareness are crucial in helping individuals distinguish reality from artificial manipulation, especially in critical contexts like elections, thereby fostering a more informed and vigilant society.

To know more about how AI has revolutionized the 2024 elections and what global efforts have been adopted to regulate it, kindly refer to our article titled "Why are free and fair elections in 2024 a challenge?"12


1. https://ssrana.in/articles/deepfake-technology-navigating-realm-synthetic-media/

2. https://economictimes.indiatimes.com/tech/technology/75-indians-have-viewed-some-deepfake-content-in-last-12-months-says-mcafee-survey/articleshow/109599811.cms?from=mdr

3. https://ssrana.in/articles/pil-and-eci-response-on-deepfakes/

4. https://economictimes.indiatimes.com/tech/technology/ettech-explainer-real-or-not-how-to-spot-a-deepfake/articleshow/105040628.cms?from=mdr

5. https://ssrana.in/articles/deepfakes-and-breach-personal-data/#:~:text=Deepfakes%20are%20videos%20creating%20delusion,of%20events%20to%20spread%20disinformation.

6. https://ssrana.in/articles/deepfake-technology-navigating-realm-synthetic-media/

7. https://pib.gov.in/PressReleasePage.aspx?PRID=1979042#:~:text=Propagation%20of%20deepfake%20content%20via,take%20expeditious%20action%20against%20deepfake.

8. https://pib.gov.in/PressReleasePage.aspx?PRID=1975445#:~:text=They%20are%20further%20mandated%20to,Indian%20Penal%20Code%20(IPC).

9. https://apnews.com/article/one-tech-tip-spotting-deepfakes-ai-8f7403c7e5a738488d74cf2326382d8c

10. https://www.intel.com/content/www/us/en/newsroom/news/intel-introduces-real-time-deepfake-detector.html#gs.9ywwrs

11. https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation#:~:text=Beginning%20in%202019%2C%20several%20states,depict%20others%20without%20their%20consent.

12. https://www.barandbench.com/law-firms/view-point/why-free-and-fair-elections-in-2024-a-challenge

For further information, please contact S.S. Rana & Co. at info@ssrana.in or call (+91-11 4012 3000). Our website can be accessed at www.ssrana.in

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

