ARTICLE
4 July 2025

Navigating AI-Related Legal Risks In Indian M&A Transactions

Kochhar & Co.


With more than 200 lawyers, Kochhar & Co. ("Firm") is one of the leading and largest corporate law firms in India. Kochhar & Co. enjoys the distinction of being the only law firm with a full-service presence in the six (6) prominent cities of India, namely: New Delhi, Mumbai, Bangalore, Chennai, Gurgaon and Hyderabad, and four (4) overseas offices: Dubai, Singapore, Atlanta and Jeddah. The Firm offers a wide range of legal services in the areas of Corporate & Commercial Laws, Dispute Resolution, Tax and Intellectual Property (IPR), and specializes in representing major foreign corporations with diverse business interests in India.

Indian companies, much like their global counterparts, are rapidly adopting artificial intelligence (AI) across internal functions and customer-facing offerings. According to a 2024 survey by IBM, 59% of Indian enterprises (with over 1,000 employees) have actively deployed AI, which marks the highest adoption rate globally. Notably, 93% of companies in India plan to increase their AI investments in 2025, either by building proprietary AI systems in-house or by licensing third-party models and tools.

This surge in AI adoption is beginning to have tangible implications across the investment and M&A landscape. As companies increasingly rely on AI, investors and acquirers ("Investors") will have to account for a growing set of legal risks associated with AI and take steps to mitigate them.

This article unpacks some of the core legal issues that could arise from the target entity's use and development of AI and explores how they could affect Investors in M&A and other investment transactions.

Ownership of IP

One of the core concerns in AI-related legal due diligence revolves around ownership of intellectual property ("IP") rights. If the target entity has integrated or used AI to develop its assets or offerings, there needs to be an evaluation of whether the entity has valid IP rights over its AI technology. This ties into the broader question of whether works developed using AI are even capable of being protected under Indian IP law. The Copyright Act, 1957, as it exists today, requires human authorship of works, and courts have often refrained from recognising copyright in AI-generated works due to the lack of human intervention.

IP offices themselves have sent mixed signals. For instance, the Indian Copyright Office initially granted registration for an AI-generated painting, naming both the AI user and AI tool as co-authors. However, according to news reports, it subsequently withdrew the registration. Despite this, the registry still reflects the work as "registered", leaving the actual legal status uncertain.

Given this legal ambiguity, target entities may find it difficult to claim ownership and assert IP rights over their AI technology. This could leave such technology vulnerable, allowing competitors to freely replicate it without consequence and leaving Investors with limited recourse.

Due diligence should also include a close review of how AI was used in product development and whether the output qualifies for protection under Indian law. Where third-party AI tools are involved, it is crucial to examine the licensing terms and determine whether the target can claim ownership over the output.

Open-Source Licenses

Open-source software is increasingly popular among Indian companies, with many leveraging it to develop AI technology. This can raise open-source licensing concerns, especially where copyleft licenses, such as the GNU General Public License, are involved. These licenses require derivative works to be distributed on the same terms as the original license. As a result, target entities that develop AI technology using copyleft-licensed code may be obligated to release the source code of their technology, regardless of whether the technology was intended to be proprietary in nature. This makes it critical for Investors to examine how open-source code is used, to ensure it does not restrict the target entity's ability to commercially exploit its AI technology.

To mitigate this risk, Investors may either require removal or replacement of copyleft-licensed code as a condition precedent to closing, or seek specific representations, warranties, and indemnities addressing open-source licensing exposure.

Training Datasets

For targets that develop their own AI systems and tools, the legality of datasets used to train these systems becomes critical. Investors need to examine whether the training datasets are legally sourced and supported by a valid license.

Unlike countries such as Japan and Singapore, Indian copyright law does not provide a specific exception for text and data mining. In the absence of such a carve-out, using copyrighted data to train AI models for commercial use could potentially amount to copyright infringement.

In fact, the Delhi High Court is currently examining this issue in the ANI vs. OpenAI case, where Asian News International has alleged that OpenAI used its copyrighted news content without permission to train its large language model, ChatGPT. While the matter is currently sub judice, the outcome could shape how companies handle training data going forward.

Even where the target entity uses third-party AI tools, legal risks persist. If those tools were trained on unlawfully obtained data and the target entity's output is found to be infringing, it could still face legal exposure. This makes it essential to assess whether the providers of AI tools offer indemnities for such scenarios.

Data Protection

Apart from IP concerns, there are also questions about whether the acquisition of training datasets, which often include personal data, complies with Indian data protection laws. This is especially relevant given that India's new Digital Personal Data Protection Act, 2023 ("DPDPA"), is a consent-centric law with very limited non-consent grounds for processing. In most cases, obtaining consent from all individuals whose personal data forms part of a training dataset would be nearly impossible. Moreover, the limited non-consent grounds under the DPDPA would likely not apply to data collected for AI training purposes.

While the DPDPA does not apply to data "made publicly available by the data principal," it remains unclear what qualifies as such. For example, if someone uploads content to a social media platform but restricts visibility to their connections or sets their profile to private, it may not count as publicly available data. Furthermore, it is often difficult to determine whether personal data online was made available by the data principals themselves or by others. These nuances make it critical for Investors to assess the legal basis for processing any personal data forming part of the training datasets used by the target entity.

In addition, scraping data from various sources across the internet to develop training datasets could potentially trigger Section 43 of the Information Technology Act, 2000, which prohibits unauthorised extraction, downloading, or copying of data. There are also concerns around whether it would be permissible to scrape information from websites that specifically prohibit web scraping in their terms of use, as this could arguably lead to a breach of contract under the Indian Contract Act, 1872.

Confidentiality and Integrity

Another key issue for Investors is whether the target entity has inadvertently exposed confidential or proprietary information by inputting such information into third-party AI tools. The terms of use of many free or non-enterprise AI tools and platforms typically grant service providers a license over user inputs and outputs. This may allow service providers to retain and reuse the data indefinitely for model training or other purposes, including sharing it with third parties.

This becomes particularly concerning where the data is confidential in nature or constitutes trade secrets. From a risk standpoint, such use could compromise the integrity of data and expose it to unauthorised access or reuse. Investors should therefore carefully review the licensing terms of any AI tools used by the target entity to ensure that data ownership and control remain with the target entity at all times.

Investors should also assess the target entity's internal policies on employee use of third-party AI tools. Many Indian companies lack clear policies or supervision in this area. In such cases, employees may informally use external tools to assist them with their tasks, and in the process, unknowingly upload confidential, sensitive, or proprietary information without the target entity's knowledge. This could threaten the confidentiality of data and result in such data becoming part of the third-party AI's training corpus.

To mitigate these risks, Investors should evaluate whether the target entity has implemented safeguards such as AI monitoring or blocking tools. They should also assess AI literacy among employees, checking whether there are formal usage policies and whether employees receive training on responsible AI use.

Bias

AI systems trained using incomplete or underrepresented datasets can often reflect and amplify bias, leading to problematic outcomes. Though there are no specific Indian regulations addressing bias in AI datasets, there are Indian laws that prohibit discrimination against persons based on gender, disability, caste, and other protected characteristics. Consequently, Investors may face legal exposure if the AI system developed by the target entity perpetuates discriminatory outcomes.

For reference, Workday, an AI recruitment platform, is currently being sued in the US for allegedly discriminating against applicants based on race, age, and disability. While this case is outside India, it underscores the global scrutiny of AI bias. Investors should therefore verify whether the target has implemented meaningful bias testing protocols and has mechanisms in place to mitigate discriminatory outcomes.

Conclusion

Given the lack of legal precedent and regulatory clarity around many AI-related risks in India, such as questions on IP ownership and data protection, it can be difficult for Investors to confidently assess a target entity's compliance with applicable laws. Some risks may seem minimal today but could become more consequential as Indian regulations evolve and begin addressing these grey areas more directly. A thorough understanding of these issues during due diligence could help Investors avoid risks down the line.

To protect against potential future liabilities, Investors should consider seeking robust representations and warranties from the target entity, backed by appropriate indemnities. For issues that present more immediate legal or operational risks, Investors may also require the target entity to remedy these before closing, by setting them out as conditions precedent in the transaction documentation. This helps ensure that key risks are addressed upfront and do not carry over post-acquisition.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
