Regulating Generative Artificial Intelligence

IndusLaw

INDUSLAW is a multi-speciality Indian law firm, advising a wide range of international and domestic clients, from Fortune 500 companies to start-ups, as well as government and regulatory bodies.
I. Introduction

Ever noticed the picture of the Pope donning a luxuriously white puffer jacket doing the rounds on the internet,1 or listened to 'Heart on My Sleeve', an AI-generated track in the voices of Drake and The Weeknd,2 or wondered how ChatGPT is able to answer all your queries? The answer to all of these is Generative Artificial Intelligence ("GenAI") models, in one form or another. These models have subtly blurred the line between the real and the virtual worlds, and presently there is little that is untouched by Artificial Intelligence ("AI"). However, this outpouring of AI-generated content, and the general takeover of the digital space by different AI models, raises several legal and ethical challenges.

Some of these issues are brought to light by the case filed by the New York Times ("NYT"), one of the most prominent news organisations, against two of the flagbearers of this revolution, OpenAI and Microsoft, for regenerating its copyrighted articles. In a recent development, OpenAI has filed a reply in court and issued a blog responding to the allegations in the complaint filed by NYT. This case highlights the need to clearly delineate and restrict the role of GenAI models by holding them accountable under applicable laws. After the NYT suit, several other news outlets, including the Chicago Tribune, The Intercept, Raw Story and the Daily News, have filed similar copyright infringement suits against OpenAI and Microsoft.3 This publication discusses some of the challenges emanating from the operation or use of GenAI models in the context of applicable Indian laws.

II. Intellectual Property Rights Issues

The NYT case is a classic example of the Intellectual Property Rights ("IPR") infringement claims made against Large Language Models ("LLMs") such as ChatGPT. It highlights issues ranging from the hosting of copyrighted material to its reproduction. GenAI models are trained on data scraped off the internet (through a process known as 'data scraping'), including copyrighted content such as e-books, websites, journals and other subscription-based content.4 Through such training, a GenAI model can memorise paywalled versions of articles and pictures.5 These models can then reproduce derivative versions, or even exact replicas, of the copyrighted content.6 In its complaint, the NYT presented evidence showing that ChatGPT, on prompting, reproduced verbatim and derivative versions of its articles which were otherwise only available to subscribers.7

When copyrighted content is made freely available through GenAI models, consumers are disincentivised from subscribing to the sites that originally created such content. While this practice poses a serious threat to the revenue model of most digital content-based sites, it generates significant profits for AI developers, who commercialise the content generated by the GenAI models through subscription services and other affiliated revenue streams. Recently, OpenAI alone reported revenue of USD 80 (eighty) million per month, and is set to exceed revenue of USD 1 (one) billion within the next year.8

In India, such infringements are regulated under the Copyright Act, 1957 ("Copyright Act"). Under the Copyright Act, the owner of copyright, in most cases, has exclusive economic rights to distribute, commercialise or make adaptations of the original work.9 Any exercise of these exclusive rights without the permission of the owner of the copyrighted material, including distributing such work for the purpose of trade or communicating it in a place for profit, constitutes an infringement of copyright.10

Courts have already held that even a website can be considered a 'place for profit' if it has been used to communicate copyrighted content and the platform has generated revenue from such publication.11 However, if the alleged infringing party had no reason to believe that the work published by it was infringing, it is exempted from liability under the Copyright Act.12 This was affirmed in MySpace v. Super Cassettes Industries Ltd.,13 where it was held that when infringement proceedings are instituted against an organisation dealing with vast amounts of data, a general notice to that organisation is insufficient and the notice must identify specific instances of copyright infringement.14

The party alleging infringement will therefore be required to present evidence that the developers had knowledge of 'specific' instances of infringement. Upon such notice, if the GenAI developers fail to remove the copyrighted content from their systems, the injured party can seek remedies under the Copyright Act, ranging from injunctions prohibiting the infringing activity to compensation for damages suffered due to the infringement.15 Even after receiving such notice, if GenAI models such as ChatGPT generate copied or adapted versions of copyrighted material for profit, their developers can be held liable for copyright infringement.

One possible defence available to a GenAI or LLM developer is the defence of 'fair use' in processing the copyrighted data. In the US, section 107 of the Copyright Act of 1976 postulates a four-factor test to determine whether the use of copyrighted material by an organisation qualifies as fair use. Unlike in the US, organisations in India can only take the defence of 'fair dealing' if their activities are covered under the specific exceptions given under section 52 of the Copyright Act. These exceptions include 'private or personal use, including research',16 'criticism or review',17 'reporting of current events and current affairs'18 and publication consisting mainly of non-copyright material for instructional use.19 This narrows the scope of the fair dealing defence available to users or platforms.

The viability of such a defence also depends on the degree of transformation introduced by the LLM or GenAI model in the output based on the copyrighted work. The transformative work should not be a mere substitute for, or serve the same purpose as, the original work.20 The extent to which a developer can argue that the output generated by its GenAI tool is transformative will depend on the purpose, function and output of the tool. Reference can be drawn from Authors Guild v. Google, Inc.21 The plaintiffs in that case claimed that the books and scholarly articles made publicly available by Google infringed the copyright in their materials.22 Google took the defence of fair use, and the court dismissed the suit, observing that the search and 'snippet view' features, which improved users' ability to navigate digital copies, constituted a transformative use of the copyrighted material.23

Another important aspect of any copyright infringement suit is the evidence presented to the court by the two sides, namely the developers on the one hand and the users or platforms on the other. This presents a separate set of challenges for each party. While it may be difficult for GenAI developers to demonstrate that the copyrighted material has been transformed to an acceptable degree, access to evidence itself is a roadblock for the users and platforms, who are highly unlikely to have knowledge of any copyrighted material stored by such models.

Further, the platforms might not have visibility, access or knowledge of specific instances of copyright infringement in a GenAI model's responses to user prompts. Accordingly, it will fall upon platforms suspecting possible infringement to subscribe to these GenAI models and curate prompts to test whether the models reproduce copyrighted material upon receiving an appropriate prompt. However, such evidence is likely to be refuted by the GenAI developers on the ground that such regurgitation is a rare occurrence and a response to intentional manipulation of the system; a similar defence has been adopted by OpenAI in its response to the NYT suit.24 It will be interesting to see how these arguments and this evidence weigh against each other in the copyright infringement suits, and how the courts interpret them.

Apart from replicating content available online, GenAI models can also easily replicate personality traits attributable to an individual. The right to publicise or commercialise such traits is termed 'personality rights'. While India does not have a dedicated statute for the regulation and protection of personality rights, these rights are protected under the extant IPR laws, and Indian courts have time and again granted protection to personality rights by prohibiting the unauthorised use of an individual's distinctive traits.25 Recently, the Delhi High Court issued an injunction against any unlawful use of the actor Anil Kapoor's personality traits.26 The court found that using the actor's name as a domain name and generating morphed pictures and his voice through GenAI models caused commercial as well as reputational harm to the actor.27 In the past, courts have also protected the personality rights of widely recognised personalities such as Amitabh Bachchan,28 Gautam Gambhir29 and Rajnikanth.30

Accordingly, publication of content infringing personality rights can invite an injunction or other penal consequences. While personality rights in India are not specifically governed by any statutory law, in the United States they are recognised and governed by common law as well as statutory legislation, including state-specific statutes.31 For instance, California Civil Code §3344 and §3344.1 prohibit the commercial use of the 'name, photograph, voice, signature or likeness' of a living or deceased person, and provide for damages that can be claimed for violation of this statutory protection by the individual, or by the individual's estate after death.32

III. Data Privacy & Data Security Issues

While the recent NYT case focuses more on issues concerning IPR infringement, the operation of GenAI models/LLMs such as ChatGPT has also raised several privacy and data protection concerns. These issues range from unauthorised use of personal data to violation of consent requirements to questionable security practices. We have analysed them in the context of not only the extant data protection and privacy laws of India but also the requirements embodied in the Digital Personal Data Protection Act, 2023 ("DPDP Act"),33 and discuss some of them in the following sections.

A. Consent Requirement Issues

GenAI models are trained on large troves of data scraped off the internet. For instance, GPT-3 had 175 (one hundred and seventy five) billion parameters and was trained on around 570 GB (five hundred and seventy gigabytes) of data comprising roughly 300 (three hundred) billion words, all collated from data sources available online.34 Even though it could appear that such scraping and training are based solely on the processing of publicly available data,35 in reality, public data sets also include publicly available personal data from social media/dialogue-based sites such as Reddit and TikTok,36 as well as personal data that has become public as a result of a breach.37 The DPDP Act provides that personal data of a user can only be processed for lawful purposes, based on the consent of the user or on specified legitimate uses.38 The data utilised to train LLMs is collected without any specific consent of the data principals (i.e., the individuals to whom the personal data relates), and hence such use of personal data is in breach of the DPDP Act.

Another way in which GenAI models collect personal data is through user registration and chat histories.39 Users generally provide instructions to GenAI models which may contain their personal information, unknowingly permitting the models to process that information to generate content. For instance, ChatGPT, by default, processes personal data collected through chat histories.40 While users may later opt out of the collection and processing of their personal data,41 no prior consent is sought from users for processing between registration and withdrawal. The DPDP Act requires data fiduciaries to obtain the prior consent of a user, through an affirmative action, before processing personal data,42 other than for legitimate uses.43 Accordingly, such processing could potentially fall foul of the prescribed requirements for compliance with privacy laws in India.

B. Purpose Limitation Issues

Even where user consent is obtained prior to the processing of personal data, it is not inconceivable, with the advancement of technology, for existing GenAI models to identify new patterns and insights from the personal data provided by a user, beyond the consented uses of such data.44 Processing can thus generate new use-cases, making it difficult for AI developers to predict and provide an accurate account of the purposes for which a user's personal data might be processed by GenAI models/LLMs. This could lead to violation of the specific consent and purpose limitation requirements under several privacy legislations.

Under the DPDP Act, for instance, data fiduciaries are required to restrict the processing of personal data to the specified purposes consented to.45 However, the processing of large volumes of data by GenAI models, which can reveal new purposes for processing that have not been consented to, undermines users' control over their personal data. To comply with these requirements, GenAI developers may opt to put in place a system that promptly obtains fresh consent from users whenever the purpose of processing their personal data changes.

C. Protecting Children's Personal Data

Furthermore, GenAI models and LLMs often do not have any mechanism for age verification of their users.46 One illustrative example is the 'Cadillac Fairview' case: a joint investigation by the Privacy Commissioner of Canada, the Information and Privacy Commissioner of Alberta, and the Information and Privacy Commissioner for British Columbia found that the company had installed facial recognition technology to collect images of mall visitors irrespective of their age.47 It then used this data to create, without consent, a 'biometric numerical representation' of around 5 (five) million visitors, including children, which was used to determine the age, gender and other sensitive personal information of the visitors.48 Such tools, if not regulated, can enable an organisation to track children and take away their ability to remain anonymous in public spaces.

In India, the DPDP Act specifically prohibits data fiduciaries from processing the personal data of children without the 'verifiable consent' of a parent or guardian, and from processing such data in a way that might have a detrimental effect on their well-being.49 Similarly, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Intermediary Guidelines") require intermediaries to make reasonable efforts not to host, display, upload, publish, transmit, store or share, among others, any information which may be harmful to a child.50 Accordingly, GenAI models operating in India should implement mechanisms to verify the age of their users, and process the personal data of children only after obtaining the verifiable consent of a parent or guardian, subject to any exemptions notified by the Government.

D. Contraventions of Rights of the Data Principals

Recently, a user who generated his biography through ChatGPT found that it contained several errors. Despite several requests from the user, OpenAI refused to rectify the errors, and the user filed a complaint against OpenAI with the Polish Data Protection Authority.51 Various data protection laws across the world, including the General Data Protection Regulation (GDPR), provide individuals with rights relating to the correction and deletion of their personal information. The DPDP Act has also accorded various rights to data principals, including the right to correct, update and complete their personal information.52 The DPDP Act further recognises an essential part of an individual's right to privacy: the 'right to erasure' of personal data.53

However, the realisation of such rights against a GenAI model is difficult owing to AI's inability to unlearn. A GenAI model, once trained on a particular set of data, cannot unlearn it unless it is retrained completely.54 Another issue stemming from this inability to unlearn is that a GenAI model is likely to retain datasets indefinitely. The DPDP Act prescribes that personal data should not be retained, unless required by law, once it can reasonably be assumed that the purpose for which it was collected is no longer being served or the user has withdrawn consent.55 If a GenAI model retains personal data beyond the period permissible under the DPDP Act, it will be in violation of the Act.

E. Data Security Concerns

The emergence of GenAI models, which store large volumes of data for training purposes, poses significant risks to the safety and security of the millions of gigabytes of data processed by AI developers. Recently, the Personal Information Protection Commission (PIPC) of South Korea imposed a fine on OpenAI for leaking the chat histories, including payment details, of ChatGPT subscribers.56 The United States Federal Trade Commission (FTC) has also opened an investigation into questionable data security practices adopted by OpenAI.57 The risk of a data privacy breach multiplies where GenAI models are used for facial recognition and related services, as this opens a Pandora's box by enabling applications to identify and track individuals based on the data collected through facial recognition.58

In India, the DPDP Act obligates a data fiduciary to adopt 'reasonable security safeguards' to prevent any personal data breach.59 The DPDP Act also requires the data fiduciary to report any data breach to the Data Protection Board (DPB) as well as the affected data principals.60 The Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 (which currently remain in force) likewise require a body corporate to put in place reasonable security procedures and policies, such as a comprehensive information security policy specifying security control measures.61 The Information Technology Act, 2000 ("IT Act") and the DPDP Act also apply to body corporates situated outside India that offer goods and services to Indian residents.62 Pursuant to these provisions of the DPDP Act and the IT Act, GenAI models processing the personal data of persons located in India are required to adopt reasonable security practices to prevent any form of breach, and to inform the relevant authorities in case of a breach.

IV. Ethical Issues Concerning Operations of GenAI Models

In addition to the legal issues highlighted above, there are several ethical issues which are largely unregulated and require the attention of lawmakers. One of them is the spread of misinformation by GenAI models, which has led to claims relating to offences such as defamation. Recently, in the US, a radio host sued OpenAI for defamation for incorrectly listing his name as one of the defendants in a case of fraud and embezzlement.63 Similarly, the Australian mayor Brian Hood alleged that OpenAI had falsely accused him of being part of a bribery case.64 In India, such cases of defamation would be punishable under the Indian Penal Code, 1860. However, it is debatable whether a GenAI system can possess the 'intention to defame' which is an essential constituent of the offence, making it difficult to hold the developer of the GenAI system liable for defamation.65

Another ethical issue is the generation of biased content by GenAI models trained on limited or unscreened data sets. The models internalise this bias and generate biased outputs. This is an extremely sensitive issue, as it may put certain individuals at a disadvantage owing to unintended discrimination ensuing from the inherent bias in the training of the relevant GenAI model. While this issue is largely unregulated in India, the Union Ministry of Electronics & IT ("MeitY") recently, with the intent of laying down responsible and trusted AI principles, issued an advisory instructing intermediaries and platforms to ensure that their computer resources, by themselves or through the use of any AI or GenAI model or software, do not permit the persistence of any bias or discrimination.66 Having said that, the binding nature of this MeitY advisory is unclear.67 Apart from the advisory, the IT Intermediary Guidelines also require significant social media intermediaries to review the propensity for bias in the automated tools deployed by them to detect explicit content.68

With the advent of GenAI, deepfakes have become an unrestrained tool for spreading misinformation. Deepfakes make it difficult to distinguish reality from fiction, which may mislead users or amplify threats to their safety. To curb deepfakes, MeitY has cautioned digital platforms against the publication of deepfakes, in view of rule 3(1)(b)(v) of the IT Intermediary Guidelines, which prohibits the hosting of misinformation or patently false information.69 In practice, however, it is very challenging for digital platforms to differentiate deepfakes from original content, and they therefore rely on intimation from users to identify and remove such deepfakes.

V. Concluding Thoughts

Given the above analysis, GenAI developers should ensure that the copyrighted material used by them for training and other purposes is used in accordance with applicable laws. This can be achieved by maintaining records identifying and documenting the data sources and procedures used to train a model, a practice that will also allow developers to deploy GenAI models more fairly and transparently. GenAI developers should also audit their systems and processes at regular intervals to eliminate any bias or discrimination. At the same time, while deploying GenAI models, developers must ensure they have mechanisms in place to comply with extant data privacy and security laws. In this regard, it will be important to enable the exercise of the rights granted to users in relation to their personal data, including by obtaining fresh consent as the purposes for processing such data change.

On the other hand, online platforms can adopt practices such as data masking and anonymisation to prevent unauthorised scraping of data from their platforms. Even though individual platforms can implement these recommendations to prevent unauthorised scraping to a certain extent, it is time for the legislature and judiciary to enforce laws against GenAI developers. The legislature should also proactively plug the gaps in the law that GenAI developers could use to escape accountability. In the absence of any other recourse currently available under law, entities have taken to filing copyright infringement suits against conglomerates such as Microsoft and OpenAI, and considering that several other suits were filed soon after the first litigation by NYT, these cases might set a precedent for challenging the use of data by GenAI models and LLMs.

Footnotes

1 Isaac Freeman, 'Misinformation. Lies. And Artificial Intelligence' dated September 27, 2023 https://www.adelaide.edu.au/alumni/news/list/2023/09/27/misinformation-lies-and-artificial-intelligence#:~:text=Take%2C%20for%20example%2C%20deep%20fake,US%20Presidents%20with%20luscious%20mullets.

2 Joe Coscarelli, 'An A.I. hit of fake 'Drake' and 'The Weeknd' rattles the World' dated April 19, 2023, https://www.nytimes.com/2023/04/19/arts/music/ai-drake-the-weeknd-fake.html.

3 Yiwen Lu, 'Digital Media Outlets Sue OpenAI for Copyright Infringement' dated February 28, 2024, https://www.nytimes.com/2024/02/28/technology/openai-copyright-suit-media.html; Emilia David, 'New York Daily News, Chicago Tribune, and others sue OpenAI and Microsoft' dated May 01, 2024, https://www.theverge.com/2024/4/30/24145603/ai-openai-microsoft-new-york-daily-news-sue-copyright.

4 Lauren Leffer, 'Your Personal Information Is Probably Being Used to Train GenAI Models' dated October 19, 2023, https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/.

5 Alex Reisner, 'Revealed: the authors whose pirated books are powering GenAI' dated August 19, 2023, https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/.

6 GenAI models such as ChatGPT and Stable Diffusion AI (prompt-based image generating GenAI model) are facing copyright infringement claims for copying and publishing copyrighted content; Blake Brittain, 'Getty Images lawsuit says Stability AI misused photos to train AI' dated February 6, 2023, https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/.

7 Dylan Walsh, 'The legal issues presented by GenAI' dated August 28, 2023 https://mitsloan.mit.edu/ideas-made-to-matter/legal-issues-presented-generative-ai.

8 The New York Times Company v. Microsoft Corporation, Openai, Inc. & Ors. (Case 1:23-Cv-11195) https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf.

9 Section 13 of the Copyright Act; and Section 14 of the Copyright Act.

10 Section 51 of the Copyright Act.

11 Section 51(a)(i) of the Copyright Act.

12 Section 51(a)(ii) of the Copyright Act.

13 MySpace v. Super Cassettes Industries Ltd., 2016 SCC OnLine Del 6382.

14 Id.

15 Section 55 of the Copyright Act.

16 Section 52(1)(a)(i) of the Copyright Act.

17 Section 52(1)(a)(ii) of the Copyright Act.

18 Section 52(1)(a)(iii) of the Copyright Act.

19 Section 52(1)(h) of the Copyright Act.

20 Syndicate of the Press of the University of Cambridge on Behalf of the Chancellor, Masters and School v. B.D. Bhandari and Ors., 2011 (47) PTC 244 (Del).

21 Authors Guild v. Google, Inc., 13-4829-cv (2d Cir. Oct. 16, 2015).

22 Id.

23 Id.

24 'OpenAI and Journalism' dated January 8, 2024, https://openai.com/blog/openai-and-journalism.

25 Titan Industries Ltd. v. Ramkumar Jewellers, 2012 SCC OnLine Del 2382; Amitabh Bachchan v. Rajat Nagi, 2022 SCC OnLine Del 411; and Gautam Gambhir v. D.A.P. & Co., 2017 SCC OnLine Del 12167.

26 Anil Kapoor v. Simply Life India, 2023 SCC OnLine Del 6914.

27 Id.

28 Amitabh Bachchan v. Rajat Nagi, 2022 SCC OnLine Del 411.

29 Gautam Gambhir v. D.A.P. & Co., 2017 SCC OnLine Del 12167.

30 Shivaji Rao Gaikwad v. M/S.Varsha Productions, 2015 SCC OnLine Mad 158.

31 Montana v. San Jose Mercury News, Inc., 34 Cal. App. 4th 790, 793.

32 California Civil Code §3344(a); and California Civil Code §3344.1(a)(1).

33 While the DPDP Act has been enacted, its provisions have not yet come into force in India.

34 Alex Hughes, 'ChatGPT: Everything you need to know about OpenAI's GPT-4 tool' dated September 25, 2023 https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/.

35 Supra at 3, 4.

36 Gintaras Radauskas, 'Redditors on strike but company wants OpenAI to pay up for scraping' dated November 15, 2023, https://cybernews.com/news/reddit-strike-api-openai-scraping/#google_vignette; and Uri Gal, 'ChatGPT is a data privacy nightmare. If you've ever posted online, you ought to be concerned' dated February 8, 2023, https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283.

37 Data Security Council of India (NASSCOM) Exploratory Note On Privacy, Data Protection, and LLMs dated June 20, 2023, https://www.dsci.in/files/content/knowledge-centre/2023/Exploratory%20note%20June%202023.pdf.

38 Section 4 of the DPDP Act.

39 Michael Schade, 'How your data is used to improve model performance' https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance.

40 Id.

41 Id.

42 Section 6 of the DPDP Act.

43 Section 7 of the DPDP Act provides that a data fiduciary may process the personal data of a data principal for certain legitimate uses, including where: (a) the data principal has provided the personal data for a specified purpose and has not indicated that they do not consent to its use; (b) the processing is for fulfilling any obligation under any law for the time being in force in India requiring any person to disclose information to the State or any of its instrumentalities, subject to such processing being in accordance with the provisions regarding disclosure of such information in any other law for the time being in force; (c) the processing is for compliance with any judgment, decree or order issued under any law for the time being in force in India, or any judgment or order relating to claims of a contractual or civil nature under any law in force outside India; (d) the processing is for responding to a medical emergency involving a threat to the life, or an immediate threat to the health, of the data principal or any other individual; (e) the processing is for taking measures to provide medical treatment or health services to any individual during an epidemic, outbreak of disease or any other threat to public health; or (f) the processing is for taking measures to ensure the safety of, or to provide assistance or services to, any individual during any disaster or any breakdown of public order.

44 Office of the Victorian Information Commissioner, 'Artificial Intelligence And Privacy – Issues And Challenges' https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-and-privacy-issues-and-challenges/.

45 Section 7(1)(a) of the DPDP Act, read with Section 6 of the DPDP Act.

46 The Italian data protection authority imposed a ban on ChatGPT for several data privacy violations, including the absence of any filter to verify the age of its users, https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9881490#english.

47 Joint investigation of the Cadillac Fairview Corporation Limited by the Privacy Commissioner of Canada, the Information and Privacy Commissioner of Alberta, and the Information and Privacy Commissioner for British Columbia (PIPEDA) Findings dated October 28, 2020, https://www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2020/pipeda-2020-004/#toc7-1.

48 Jasmine Irwin, Alannah Dharamshi, and Noah Zon, 'Children's Privacy in the Age of Artificial Intelligence' dated March 2021, https://www.csagroup.org/wp-content/uploads/CSA-Group-Research-Children_s-Privacy-in-the-Age-of-Artificial-Intelligence.pdf.

49 Section 9(1) of the DPDP Act; and section 9(2) of the DPDP Act.

50 Rule 3(1)(b)(iii) of the IT Intermediary Guidelines.

51 Natasha Lomas, 'ChatGPT-maker OpenAI accused of string of data protection breaches in GDPR complaint filed by privacy researcher' dated August 30, 2023 https://techcrunch.com/2023/08/30/chatgpt-maker-openai-accused-of-string-of-data-protection-breaches-in-gdpr-complaint-filed-by-privacy-researcher/.

52 Section 12 of the DPDP Act.

53 Section 12 of the DPDP Act.

54 'Do's and Don'ts for Developing, Extending, and Using Generative AI Models', dated May 2 2023 https://www.wsgr.com/en/insights/dos-and-donts-for-developing-extending-and-using-generative-ai-models.html.

55 Section 8(7) of the DPDP Act.

56 'Navigating Generative AI Privacy Challenges & Safeguarding Tips' dated September 21, 2023 https://securiti.ai/generative-aiprivacy/#:~:text=However%2C%20the%20potential%20of%20inadvertently,without%20the%20individual's%20explicit%20consent.

57 'The Washington Post Civil Investigative Demand', https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117e50e8.pdf?itid=lk_inline_manual_4.

58 Hafiz Sheikh Adnan Ahmed, 'Challenges of AI and Data Privacy—And How to Solve Them' dated October 6, 2021 https://www.isaca.org/resources/news-and-trends/newsletters/atisaca/2021/volume-32/challenges-of-ai-and-data-privacy-and-how-to-solve-them.

59 Section 8(5) of the DPDP Act.

60 Section 8(6) of the DPDP Act.

61 Rule 8 of The Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.

62 Section 1(2) of the Information Technology Act, 2000; and Section 3(b) of the DPDP Act.

63 Walters v. OpenAI L.L.C., No. 23-A-04860-2.

64 Byron Kaye, 'Australian mayor readies world's first defamation lawsuit over ChatGPT content' dated April 6, 2023 https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/.

65 Subramanian Swamy v. Union of India, (2016) 7 SCC 221.

66 MeitY advisory titled 'Due diligence by Intermediaries / Platforms under the Information Technology Act, 2000 and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021' dated March 15, 2024.

67 CNBC, 'IT Minister Ashwini Vaishnaw clarifies that advisory on AI is not binding, aimed at social media companies' dated March 04, 2024, https://www.cnbctv18.com/technology/it-minister-ashwini-vaishnaw-clarifies-that-advisory-on-ai-is-not-binding-aimed-at-socialmediacompanies-19195391.htm.

68 Rule 4(4) of the IT Intermediary Guidelines.

69. 'MeitY issues advisory to all intermediaries to comply with existing IT rules' dated December 26, 2023, https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
