21 May 2025

Powering Progress: Key Legal Considerations For Using AI Systems In-house

IndusLaw

Contributor

INDUSLAW is a multi-speciality Indian law firm, advising a wide range of international and domestic clients from Fortune 500 companies to start-ups, and government and regulatory bodies.

1. INTRODUCTION

Last year, the American Equal Employment Opportunity Commission ("EEOC") settled its first lawsuit against discrimination in hiring by an AI tool.1 The EEOC sued 3 (three) companies that provided tutoring services under the "iTutorGroup" brand name ("iTutorGroup") alleging that iTutorGroup violated the Age Discrimination in Employment Act, 1967 ("ADEA"), as its AI software automatically rejected over 200 (two-hundred) female applicants aged 55 (fifty-five) years and above and male applicants aged 60 (sixty) years and above, due to their age. The parties eventually signed a consent decree under which iTutorGroup agreed to pay USD 365,000 (United States Dollars Three Hundred and Sixty-Five Thousand) to the group of automatically rejected applicants, adopt anti-discrimination policies, and conduct internal training to ensure compliance with equal employment opportunity laws. This case illustrates a critical challenge in the field of artificial intelligence ("AI"). While AI offers immense potential for organizations, harnessing this transformative technology requires navigating hurdles presented by a complex legal landscape.

This article explores some of the key legal issues that organizations in India should take into account while deploying AI systems in-house.

2. OWNERSHIP OF INTELLECTUAL PROPERTY RIGHTS

Determining who owns the content or results generated by AI systems can be tricky. There are two major concerns when examining the interplay of intellectual property rights and AI systems: (i) ownership of intellectual property rights over the output generated using such AI systems; and (ii) infringement of intellectual property rights of third parties arising due to use of the AI system.

With regard to intellectual property ownership, copyright laws across jurisdictions only recognise natural persons as authors of works in which copyright subsists. The Indian Copyright Act, 1957 ("Copyright Act") is no different and grants authorship of a computer-generated work to the 'person who causes the work to be created'.2 In the case of AI-generated works, it is unclear who this 'person' is – is it the owner of the AI system, the user of the AI system, or the AI system itself? Alternatively, should ownership be granted to none of the foregoing persons, with AI-generated works belonging in the public domain?

The Beijing Internet Court in Li Yunkai v. Liu Yuanchun3 adopted an innovative solution to this issue. In this case, the plaintiff generated an image using prompts on an AI tool and then published the picture on a social media platform. The defendant used this image in his own article without the plaintiff's consent and published it on a blog platform. The plaintiff filed for copyright infringement. The court noted that the plaintiff selected and arranged the order of prompt words, set parameters, designed the presentation of the image, and selected the picture that he wanted out of the multiple AI-generated results. The final artwork was therefore original and an intellectual achievement and was eligible for copyright protection under Chinese copyright law.

However, as Chinese copyright law only recognizes natural persons as authors, and the creator of the AI tool was not directly involved in the creation of the image, the plaintiff was held to be the author and copyright owner of the image. Accordingly, the court held that the defendant's use of the plaintiff's AI-generated image constituted copyright infringement.

In this way, courts may proactively apply the spirit of the law to situations where the law itself does not provide clarity. Since the current legal framework offers no clarity on the ownership of intellectual property in AI-generated content, AI system providers often address ownership contractually through their terms of use. For example, several AI system providers state in their terms of use4 that the end user owns both the input data and the output generated from it. Some AI system providers go a step further and offer to defend users against third-party intellectual property infringement claims arising from use of the output generated by the AI system.5 Therefore, before implementing an AI system, a company should review its agreements with the AI system provider or the terms of use of the AI system and use the AI-generated output accordingly.

The second key concern is the infringement of third parties' intellectual property rights arising from use of the AI system. AI systems rely on large volumes of third-party input data for training. Ensuring clear ownership and licensing of all such elements is crucial to avoiding third-party intellectual property infringement. Unless all the works in the datasets used by an AI system to generate output are owned by the AI system provider, duly licensed, or freely available in the public domain, concerns about copyright infringement may arise in relation to the output data.

In recent times, several copyright infringement cases have been filed against AI system providers for the alleged unlicensed use of copyrighted material to train their AI systems. One such case is Raw Story Media Inc. and AlterNet Media Inc. v. OpenAI,6 where news organizations sued OpenAI under the Digital Millennium Copyright Act ("DMCA"), alleging that their copyrighted journalistic articles were used to train ChatGPT without proper attribution. The plaintiffs claimed that OpenAI removed copyright management information ("CMI") from the copyrighted works before incorporating them into ChatGPT's training datasets, thereby violating Section 1202(b)(1) of the DMCA. The plaintiffs contended that such removal resulted in a concrete injury and heightened the risk of ChatGPT reproducing their works without proper attribution. However, the court dismissed the case on the grounds that the plaintiffs failed to demonstrate an actual or imminent injury, noting that the vast and diverse repository of data from which ChatGPT generates responses makes the risk of reproducing any specific article remote.

Similarly, in Andersen v. Stability AI,7 several visual artists filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt, alleging that their works were ingested into the AI image generator systems' training datasets without their permission. The court issued an order partially granting and partially denying motions to dismiss the plaintiffs' first amended complaint. Notably, while the court dismissed the DMCA claims for removal and alteration of CMI, it allowed the trademark, direct copyright infringement, and inducement claims to proceed.

Indian courts are also presently adjudicating upon this issue in the ongoing case of ANI vs. OpenAI.8 Asian News International ("ANI"), a news agency, filed a copyright infringement suit in 2024 alleging that OpenAI improperly accessed its copyrighted material, including non-public subscription-based news articles, to train ChatGPT, resulting in outputs that were either verbatim reproductions of or substantially similar to its copyrighted work. The Delhi High Court, recognizing that there is no existing jurisprudence in India on the matter, has identified several novel issues in this case, including whether the storage of ANI's news content for training ChatGPT or the use of such content to generate user responses amounts to copyright infringement, and whether such use qualifies as 'fair use' under Section 52 of the Copyright Act.

The verdicts in these ongoing cases are expected to have a ground-breaking impact on the AI ecosystem and will shape the manner in which companies both offer and implement AI systems.

Considering the sheer volume of data used to train AI tools, it may be challenging for businesses to determine the contents of the underlying datasets and to hedge against the risk of intellectual property rights violations. Accordingly, enterprises should review the terms of use of AI systems to determine both the AI system provider's and their own liability for any third-party intellectual property claims. They should also include provisions in contracts with AI system providers to ensure that the organization is sufficiently protected against such third-party copyright infringement claims.

3. LACK OF TRANSPARENCY

AI systems often operate on a 'black box' basis, wherein even the person developing the AI system is unable to determine the metrics adopted by the AI system in generating outputs, even if the inputs used to train and operate it are known.9 The 'black box' nature of AI systems restricts visibility over their internal decision-making processes. This lack of transparency can give rise to legal concerns, as regulators in sectors where accountability is essential, such as finance, healthcare, and insurance, may require companies to provide a rationale for decisions arrived at using such AI systems.
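By way of illustration, the sketch below (in Python, using the scikit-learn library) shows one way a company might generate an auditable, per-decision rationale from a simple interpretable model. The feature names, data, and model are hypothetical, and real deployments would require considerably more rigorous explainability tooling.

```python
# Illustrative sketch: recording a per-decision rationale for an automated
# credit-style decision, so that a reviewer or regulator can see which input
# factors drove the outcome. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "tenure_years", "existing_loans"]
X = rng.normal(size=(500, 3))  # stand-in for real applicant data
y = (X[:, 0] - X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def decision_rationale(applicant):
    """Return the approval probability plus each feature's contribution
    to the model's score, forming an auditable explanation record."""
    contributions = model.coef_[0] * applicant
    prob = model.predict_proba(applicant.reshape(1, -1))[0, 1]
    return {
        "approval_probability": round(float(prob), 3),
        "contributions": {f: round(float(c), 3)
                          for f, c in zip(features, contributions)},
    }

print(decision_rationale(X[0]))
```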

4. BIASED RESULTS

AI systems with inherent bias can cause significant issues. When the models underlying AI systems are trained on biased datasets, the decisions or outputs they generate may cause or reinforce discrimination. This could result in violations of anti-discrimination laws, especially where AI tools are used in hiring, human resources, or financial services such as insurance or lending.

Recently, in July 2024, the United States District Court for the Northern District of California in Mobley v. Workday, Inc.10 delivered the first-ever ruling holding an AI software vendor liable for employment discrimination caused by the use of its AI hiring tool. Derek Mobley, an African American man aged over 40 (forty) years with anxiety and depression, had been rejected from over 100 (one hundred) jobs that he applied for through the Workday, Inc. ("Workday") platform despite having the requisite qualifications. He filed a class action suit against Workday, alleging that Workday's algorithm-based applicant screening tools discriminated against him and other similar candidates on the basis of race, age, and disability.

The court observed that the applicable anti-discrimination laws prohibited discrimination by an "employer", which term included agents of employers. The court noted that Workday did qualify as an agent because its tools performed the traditional hiring function of rejecting certain candidates at the screening stage and recommending which ones to advance to subsequent stages through the use of artificial intelligence and machine learning. It ruled that employers cannot escape liability for discrimination by delegating their traditional functions such as hiring to a third party.

While this case sets an important precedent for the legal implications of using AI to automate hiring and other employment-related functions, it also sends out a larger warning: AI software providers can no longer expect to escape liability for the consequences of their customers' use of their AI tools.

Considering these legal trends, bias mitigation strategies should be integrated into the research and development phases of AI systems. Companies should focus on using diverse datasets, applying bias prevention techniques such as supervised learning models, and continuously monitoring the AI system, including by retraining it with updated and representative data.
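As one illustration of such monitoring, the short Python sketch below applies the 'four-fifths rule' heuristic from US employment guidance (under which the selection rate for any group should be at least 80% of the highest group's rate) to a batch of hiring outcomes. The group labels and records are invented for illustration.

```python
# Illustrative sketch: a periodic disparate-impact check on an AI hiring
# tool's outcomes, using the "four-fifths rule" heuristic. Group labels
# and decision records here are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the benchmark."""
    rates = selection_rates(decisions)
    benchmark = max(rates.values())
    return {g: {"rate": round(r, 3), "passes": r >= threshold * benchmark}
            for g, r in rates.items()}

sample = ([("under_40", True)] * 60 + [("under_40", False)] * 40
          + [("40_and_over", True)] * 30 + [("40_and_over", False)] * 70)
print(four_fifths_check(sample))  # flags the 40_and_over group
```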

5. INACCURATE OUTPUTS

One of the most pressing issues with AI systems is the inaccuracy of results generated by the system. This inaccuracy primarily stems from insufficient, outdated, or poor-quality input data used for training, or from flawed or obsolete learning models. Addressing such inaccuracies requires implementing control mechanisms to detect and correct errors in AI outputs, as well as tracking past inaccuracies and using this information to refine AI models.

To mitigate these risks, companies can adopt several best practices, including regular evaluation of the functionality of AI models, timely updates to the AI software, and audits of internal practices to maintain compliance with prescribed regulatory standards. Companies can also consider mandating human oversight for critical decisions.
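To make the human-oversight point concrete, the following Python sketch routes critical or low-confidence AI outputs to mandatory human review instead of auto-approving them. The confidence score reported by the AI system and the thresholds used are assumptions for illustration.

```python
# Illustrative sketch: an oversight gate that routes low-confidence or
# high-stakes AI outputs to a human reviewer, logging each escalation so
# past inaccuracies can be tracked. Model output format is hypothetical.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass
class AIResult:
    answer: str
    confidence: float  # assumed to be reported by the AI system

def review_gate(result: AIResult, critical: bool, min_confidence: float = 0.9):
    """Auto-approve only safe outputs; escalate everything else."""
    if critical or result.confidence < min_confidence:
        logging.info("Escalating to human review (confidence=%.2f, critical=%s)",
                     result.confidence, critical)
        return {"status": "pending_human_review", "draft": result.answer}
    return {"status": "auto_approved", "answer": result.answer}

print(review_gate(AIResult("Approve claim", 0.72), critical=False))
```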

6. DATA PROTECTION AND PRIVACY

The use of AI systems by companies can give rise to several concerns regarding data protection and privacy, especially in light of the Digital Personal Data Protection Act, 2023 ("DPDPA"). The use of AI systems may involve the processing of vast amounts of personal data which presents risks related to data privacy, consent, and protection.

The DPDPA requires that any processing of personal data be carried out on lawful grounds, such as explicit consent or certain legitimate uses.11 Since AI systems frequently rely on extensive datasets that often include personal data, companies must ensure that adequate consent is obtained from data principals for such use of their personal data.

The DPDPA also mandates that companies collect and process only the minimum personal data necessary to achieve the specific purpose of the processing.12 Since AI systems are designed to process large volumes of data to enhance their performance, companies must implement data governance protocols to limit the collection of personal data to the minimum amount necessary for the functioning of the AI system. Additionally, the DPDPA grants data principals the right to withdraw consent for the processing of their data and to request the deletion of their personal data, and also obligates companies to delete personal data once the purpose of its processing has been achieved.13 To comply with the same, companies must implement data deletion mechanisms that allow for the effective removal of personal data. However, since AI systems often integrate personal data deeply within their models during training, identifying and removing such data might be technically challenging.
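As a rough illustration of such mechanisms, the Python sketch below records consent, treats withdrawal as a trigger for erasure, and checks consent validity before any processing. The storage model and identifiers are hypothetical; as noted above, a production system would also need to propagate deletion to backups, logs, and any AI training pipelines.

```python
# Illustrative sketch: a minimal consent registry supporting DPDPA-style
# rights discussed above: recording consent, withdrawing it, and erasing a
# data principal's records on request. Storage is hypothetical.
from datetime import datetime, timezone

class ConsentRegistry:
    def __init__(self):
        self._records = {}  # principal_id -> consent record

    def record_consent(self, principal_id: str, purpose: str):
        self._records[principal_id] = {
            "purpose": purpose,
            "consented_at": datetime.now(timezone.utc).isoformat(),
        }

    def withdraw_consent(self, principal_id: str):
        # Withdrawal triggers erasure of the principal's personal data.
        self.erase(principal_id)

    def erase(self, principal_id: str):
        self._records.pop(principal_id, None)

    def has_valid_consent(self, principal_id: str, purpose: str) -> bool:
        record = self._records.get(principal_id)
        return record is not None and record["purpose"] == purpose

registry = ConsentRegistry()
registry.record_consent("user-42", purpose="model_personalisation")
print(registry.has_valid_consent("user-42", "model_personalisation"))  # True
registry.withdraw_consent("user-42")
print(registry.has_valid_consent("user-42", "model_personalisation"))  # False
```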

The overarching requirement of data security remains a crucial obligation under the DPDPA and cybersecurity directions issued under the IT Act, 2000 ("IT Act").14 Companies must adopt appropriate technical and organizational measures to safeguard the personal data processed by the AI system.15 This includes implementing encryption and access controls and conducting regular security audits to detect and mitigate potential vulnerabilities. Ensuring that personal data processed by AI software is adequately protected and that the risk of data breaches is minimized is essential for maintaining compliance with applicable laws, in order to avoid the imposition of significant penalties as well as loss of goodwill and consumer trust.
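One possible technical measure of this kind is sketched below in Python: pseudonymising obvious identifiers before text leaves the organization for an external AI system. The patterns and key handling shown are deliberately simplistic and purely illustrative; a real deployment would use a vetted PII-detection tool and keys held in a proper key-management system.

```python
# Illustrative sketch: pseudonymising obvious identifiers before text is
# sent to an external AI system, as one measure alongside encryption in
# transit and access controls. The regexes are deliberately simplistic.
import hashlib
import hmac
import re

SECRET_KEY = b"replace-with-managed-key"  # hypothetical; keep in a KMS

def pseudonym(value: str) -> str:
    """Deterministic token: the same identifier maps to the same tag."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:10]

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d -]{8,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub(lambda m: f"<email:{pseudonym(m.group())}>", text)
    text = PHONE.sub(lambda m: f"<phone:{pseudonym(m.group())}>", text)
    return text

print(redact("Contact Asha at asha@example.com or +91 98765 43210."))
```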

7. CURRENT INDIAN LAW

The legal framework governing AI technologies in India primarily consists of the IT Act and rules issued under it. The government is presently considering the introduction of the Digital India Act, 2023 ("DIA")16 to replace the IT Act. While there are no laws specifically tailored for AI technologies in India yet, the DIA is likely to better address emerging technologies like AI.

In the last few years, regulatory and industry bodies in India have issued certain policies, guidelines, and discussion papers that provide an indication of upcoming legal trends in the AI space. The NITI (National Institution for Transforming India) Aayog released the National Strategy for Artificial Intelligence - AIforAll, which aimed to harness AI for social and inclusive growth in alignment with the Government of India's AI roadmap, along with approach documents on principles for responsible AI17 and a discussion paper on facial recognition technology.18 The cybersecurity watchdog CERT-In (Indian Computer Emergency Response Team) released an advisory for AI language-based applications,19 and the NASSCOM (National Association of Software and Service Companies) issued guidelines on generative AI.20 The government has also implemented digital skilling initiatives21 to build public awareness about AI tools.

In March 2024, the Ministry of Electronics and Information Technology issued an advisory22 to intermediaries and platforms concerning the use of AI models, large language models, generative AI, and related algorithms. This advisory recommends several key actions, including, inter alia, that intermediaries and platforms must ensure compliance with content-related regulations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, prevent any bias or discrimination arising from AI models, and safeguard the integrity of the electoral process. Additionally, the advisory stresses the importance of clearly labelling AI-generated content and implementing consent mechanisms to inform users that content is produced by AI tools.

8. GETTING FUTURE-READY

As companies increasingly leverage AI systems to enhance procedural efficiency and decision-making capabilities, the in-house integration of AI technologies would require a proactive approach to both operational goals and legal compliance. Companies must start by assessing their specific needs, identifying where AI can deliver the most value with minimal risk, acquiring and preparing the necessary datasets, and implementing robust cybersecurity and privacy frameworks to ensure safe user experiences.

In the process of deploying such AI systems, companies must focus on privacy-by-design principles and secure data practices and may also need to implement user interface modifications such as privacy notices and consent pop-ups that enhance transparency and usability. Companies should also establish clearly documented processes and policies for the use of AI systems. These policies should aim to explain how AI is used in the company's products, services, and the applicable decision-making processes, and inform customers and employees about their respective rights to human intervention in cases where AI-generated decisions may affect them.

As the global focus on AI regulation intensifies, with countries taking a cue from the European Union's ("EU") tiered, risk-based regulation of AI models under the EU AI Act, Indian companies should also prepare to face similar scrutiny under upcoming laws. While comprehensive legislation is still on the horizon, companies must take proactive action to align with emerging regulatory standards in this rapidly evolving AI landscape.

Footnotes

1 Equal Employment Opportunity Commission v. iTutorGroup, Inc., No. 1:22-cv-2565-PKC-PK (E.D.N.Y. filed May 5, 2022).

2 Section 2(d)(vi) of the Copyright Act, 1957.

3 Li Yunkai v. Liu Yuanchun, (2023) Jing 0491 Min Chu No. 11279.

4 See here, here and here.

5 See here, here and here.

6 Raw Story Media v. OpenAI, No. 24 Civ. 01514 (S.D.N.Y. Nov. 7, 2024).

7 Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal.).

8 ANI Media Pvt. Ltd. vs. Open AI Inc & Anr., CS(COMM) 1028/2024.

9 See here.

10 Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal.).

11 Section 4 of the DPDPA.

12 Section 6 of the DPDPA.

13 Section 8(7) of the DPDPA.

14 Available here.

15 Section 8(5) of the DPDPA.

16 Available here.

17 Available here and here.

18 Available here.

19 Available here.

20 Available here.

21 See here and here.

22 Available here.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
