(This article is presented in two interconnected parts. Part One lays the conceptual foundation by examining the key antitrust and intellectual property challenges arising from AI and algorithmic markets. Part Two builds on that framework by analysing the Indian regulatory perspective, global enforcement trends and the broader policy conclusions that will shape the future of AI governance.)
6. AI's Disruption of Patent Law and Inventorship
6.1. Can AI Be an Inventor?
AI systems have advanced to the point where they can autonomously generate a wide array of complex outputs that were once the exclusive domain of human expertise. Today, machine learning models can propose novel chemical formulations, identify promising drug candidates and produce detailed engineering schematics that optimise both function and efficiency. They can write sophisticated software code, create innovative product designs and develop optimisation algorithms that refine processes far beyond traditional human-driven methods. This growing ability of AI to independently produce high-value technical and creative outputs is reshaping the landscape of research, innovation and intellectual property.
Patent law is built on the principle that every invention must have an identifiable human inventor, but this foundation becomes increasingly unstable when AI plays a substantive role in generating the inventive concept. As algorithms begin to autonomously propose solutions, design structures or discover novel compounds, courts and patent offices face deep doctrinal contradictions. If no human can genuinely be said to have created the inventive step, does that render the invention unpatentable under existing law? Conversely, if a company names a human inventor despite the inventive contribution coming from an AI system, does this amount to a misrepresentation or even potential fraud on the patent office? These tensions also raise broader policy questions: should AI-generated inventions fall automatically into the public domain and, if so, would that undermine incentives for firms to invest in AI-driven research and development? As AI's inventive capabilities continue to accelerate, these unresolved issues place significant strain on the traditional contours of patentability and innovation policy. Regulators worldwide have struggled with these questions, and a harmonised approach remains distant.
6.2. AI as a Tool vs. AI as a Creator
The line between AI functioning merely as an "assistive tool" and AI acting as a genuine "creative originator" has grown increasingly difficult to draw. In practice, most AI-driven inventions emerge from a hybrid process: humans define the problem or objective, the algorithm performs extensive optimisation and exploration, and the system then generates potential solutions or outputs that would have been difficult for humans to conceive independently. These machine-generated results are typically followed by human evaluation, adjustment and refinement, creating a collaborative inventive cycle in which the respective contributions of human and machine are deeply intertwined. This blended process challenges long-standing assumptions about authorship and inventorship, complicating efforts to apply traditional patent doctrines to modern AI-enabled innovation. Determining the locus of inventorship becomes a challenge, and patent policy must strike a balance between incentivising innovation and ensuring legal certainty. A simplified sketch of this cycle appears below.
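To make the hybrid process concrete, the following Python sketch is a deliberately simplified, hypothetical model of the cycle: the objective function, the random search and the review step are illustrative stand-ins, not a description of any real inventive system.

```python
import random

# A minimal, hypothetical sketch of the hybrid inventive cycle described
# above: a human frames the objective, an algorithm explores the design
# space at a scale no human could, and a human reviews the output.

def human_defined_objective(candidate):
    """The human contribution: deciding what counts as a 'better' design.
    Here, a toy score over three abstract design parameters."""
    a, b, c = candidate
    return -(a - 0.3) ** 2 - (b - 0.7) ** 2 - (c - 0.5) ** 2

def algorithmic_exploration(n_candidates=10_000):
    """The machine contribution: broad autonomous search proposing
    candidates that no human enumerated individually."""
    candidates = [tuple(random.random() for _ in range(3))
                  for _ in range(n_candidates)]
    return sorted(candidates, key=human_defined_objective, reverse=True)[:5]

def human_review(shortlist):
    """Human evaluation of the machine-generated shortlist; in practice
    this means expert judgement, testing and iterative refinement."""
    return shortlist[0]

if __name__ == "__main__":
    chosen = human_review(algorithmic_exploration())
    print("Selected design parameters:", [round(x, 3) for x in chosen])
```

Even in this toy version, the question the text poses is visible: the human wrote the objective and picked the winner, but the specific solution was surfaced by the machine.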
7. Copyright Challenges with AI-Generated Content
AI-generated content, including text, images, videos, music and software, is exploding across industries. However, this growth creates significant copyright risks.
7.1. Training Data Liability
Training datasets used to build modern AI systems often draw from vast and diverse sources, including copyrighted books, articles, images and videos, as well as proprietary datasets compiled by private entities. They may also incorporate scraped online content collected without explicit permission and, in some cases, even fragments of private user data captured through digital interactions. Because AI models rely on large-scale ingestion of such material to develop their capabilities, the legal status of these practices remains highly unsettled. At the heart of the debate is whether training a model on copyrighted or restricted data constitutes an infringing act, especially when the training process involves copying, storing and transforming large volumes of protected content.
These uncertainties raise several critical legal and policy questions. Does the doctrine of "fair use" or "transformative use" apply when copyrighted material is ingested not for human consumption, but for machine learning? Should companies be required to compensate creators whose works formed part of a model's training dataset? And where multiple actors are involved in an AI system's lifecycle, who ultimately bears liability for improper data use: the developer who trained the model, the platform that deploys it or the end user who generates outputs? Courts around the world are now confronting these issues and their decisions will shape not only the future of AI development, but also the balance between technological innovation and the rights of creators and data owners.
7.2. Output Similarity and Copyright Ownership
AI-generated outputs can, at times, closely replicate or echo copyrighted expressions from the material on which the model was trained, creating a genuine risk of inadvertent copyright infringement for businesses that deploy such systems. Companies must therefore evaluate multiple layers of exposure: whether the outputs themselves infringe existing works, how ownership of AI-generated content is defined or limited under applicable law and how contractual arrangements allocate rights and liabilities between users and the AI service providers powering these tools. The challenge is heightened by the fact that many jurisdictions do not recognise copyright protection for works lacking human authorship, leaving organisations with uncertain or non-existent rights to commercialise assets produced entirely or predominantly by AI. This legal ambiguity complicates product development, licensing strategies and IP portfolio management for enterprises integrating generative AI into their workflows.
8. How Antitrust Must Adapt to AI-Driven Markets
The most pressing challenge is conceptual: antitrust law was built for human strategic behaviour, not machine learning. AI forces regulators to rethink foundational doctrines.
8.1. Rethinking the Concept of "Agreement"
A central question confronting modern antitrust enforcement is whether algorithms that independently learn and converge on collusive outcomes should be treated as having formed an unlawful "agreement," even in the absence of human intent or communication. If AI systems autonomously stabilise prices, allocate customers or restrict output in ways that mirror cartel behaviour, regulators may need to rethink what constitutes concerted action. Several possibilities are emerging in global enforcement debates. One approach is strict liability, where companies deploying high-risk pricing or optimisation algorithms are held responsible for collusive outcomes regardless of intent. Another is presumptive liability for shared AI vendors, recognising that common technology providers may facilitate or amplify coordinated strategies across multiple competitors. Regulators may also move toward treating algorithmic design choices as evidence of intent, especially where models are built to react to competitor behaviour or to optimise for industry-wide profit rather than firm-level performance. A more radical proposal is the creation of a new doctrinal category, the "algorithmic agreement", to capture coordination emerging from machine behaviour rather than human communication. Each of these options reflects the growing need to adapt legal frameworks to markets increasingly shaped not by explicit collusion, but by the emergent dynamics of autonomous systems. The toy simulation below shows how such convergence can arise without any communication at all.
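As a purely illustrative aid, the Python sketch below follows the spirit of academic experiments on algorithmic pricing (Q-learning agents in a repeated Bertrand duopoly). Every element, including the price grid, cost, demand function and learning parameters, is a hypothetical assumption; the point is only that the two independently learning agents observe nothing but the rival's last price, yet, depending on parameters, can settle above the competitive level.

```python
import random
from collections import defaultdict

# Toy repeated-Bertrand duopoly with two independent Q-learning agents.
# There is no communication channel: each agent observes only the
# rival's most recent price. All parameters are hypothetical.

PRICES = [1, 2, 3, 4, 5]           # discrete price grid; unit cost = 1
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def profit(own, rival):
    """Homogeneous goods: the lower price takes the whole market of 10 units."""
    if own < rival:
        return (own - 1) * 10
    if own == rival:
        return (own - 1) * 5   # split the market on a tie
    return 0

def choose(q, state):
    """Epsilon-greedy action selection over the price grid."""
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: q[(state, p)])

q1, q2 = defaultdict(float), defaultdict(float)
s1 = s2 = random.choice(PRICES)    # state = rival's last observed price

for _ in range(200_000):
    p1, p2 = choose(q1, s1), choose(q2, s2)
    r1, r2 = profit(p1, p2), profit(p2, p1)
    # Standard Q-learning update, performed independently by each agent.
    q1[(s1, p1)] += ALPHA * (r1 + GAMMA * max(q1[(p2, p)] for p in PRICES) - q1[(s1, p1)])
    q2[(s2, p2)] += ALPHA * (r2 + GAMMA * max(q2[(p1, p)] for p in PRICES) - q2[(s2, p2)])
    s1, s2 = p2, p1                # each agent "sees" only the rival's price

print("Greedy prices after training:",
      max(PRICES, key=lambda p: q1[(s1, p)]),
      max(PRICES, key=lambda p: q2[(s2, p)]))
```

If the printed prices sit above the lowest rungs of the grid, no message was ever exchanged: any coordination is an emergent property of two optimisers reacting to each other, which is precisely the gap in the traditional "agreement" requirement.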
8.2. Revisiting Market Definition
AI-driven markets increasingly exhibit features that make traditional approaches to defining relevant markets far less reliable. These environments are shaped by multi-sided interactions, where platforms must balance the interests of different user groups simultaneously; strong network effects, which cause value to grow as more users join; and significant data-driven switching costs, as users become locked into ecosystems through personalised data, historical preferences and integrated services. In addition, continuous dynamic model improvements, where algorithms evolve, learn and adapt over time, mean that competitive conditions shift rapidly and cannot be assessed through static snapshots. As a result, traditional market definition tools such as the SSNIP test (which asks whether a hypothetical monopolist could profitably impose a small but significant non-transitory increase in price) or analyses based purely on static market shares may be inadequate for capturing the true competitive landscape within AI-intensive sectors, as the worked example below suggests.
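A short worked example, with entirely hypothetical numbers, shows where the test strains. The standard critical-loss arithmetic below is mechanical; the difficulty in AI-driven markets lies in estimating the "actual loss" input, which network effects and data-driven lock-in shift over time.

```python
# Critical-loss analysis for a hypothetical 5% SSNIP. All figures invented.

price_rise = 0.05   # the "small but significant" 5% price increase
margin = 0.40       # gross margin of the candidate market, assumed at 40%

# Break-even critical loss: the share of sales the hypothetical
# monopolist can lose before the price rise stops being profitable.
critical_loss = price_rise / (price_rise + margin)   # = 0.111... (~11%)

# Estimated share of sales actually lost to substitutes -- in AI-driven
# platform markets, lock-in and network effects make this figure both
# hard to estimate and unstable over time.
actual_loss = 0.08

if actual_loss < critical_loss:
    print("5% rise is profitable -> candidate market is plausibly well defined")
else:
    print("Too many customers would switch -> widen the candidate market")
```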
8.3. Expanding Dominance Analysis
AI is reshaping what it means for a firm to hold market power by creating entirely new dimensions of dominance that extend far beyond traditional measures like output or pricing control. Companies may exercise data dominance by controlling vast, high-quality datasets that are essential for training competitive models. Others may hold model dominance, where proprietary architectures or foundational models become indispensable industry infrastructure. Firms with superior cloud capacity or specialised chips can develop compute dominance, enabling them to train and deploy advanced AI systems at a scale rivals cannot match. Platforms may also wield algorithmic dominance by using ranking, recommendation or search algorithms to steer user behaviour and shape market outcomes. Finally, access dominance emerges when companies control the APIs, interfaces or interoperability gateways that competitors need in order to participate meaningfully in the digital ecosystem. Together, these new forms of AI-enabled control require regulators to expand their understanding of dominance and adapt enforcement frameworks to reflect the realities of algorithmic markets.
8.4. Liability for AI Developers
A revolutionary shift is taking shape in competition enforcement as regulators increasingly recognise that developers of AI tools themselves may bear antitrust responsibility when their models facilitate collusion or exclusionary conduct. This marks a departure from the traditional focus solely on market participants and expands scrutiny to the architects of the algorithms that shape competitive behaviour. Potential liabilities for developers arise in several scenarios: designing models that are capable of predicting or reacting to competitor strategies in ways that stabilise prices; embedding optimisation functions that naturally promote uniform pricing across users; creating systems that optimise for industry-wide profitability rather than firm-specific outcomes; or tuning and refining models using multi-client datasets that inadvertently align strategic behaviour across competitors. Collectively, these risks signal a profound expansion of antitrust enforcement, extending accountability beyond firms that deploy AI systems to include those that design, train and maintain the algorithms driving market dynamics. The short sketch below contrasts the two objective functions at the heart of this distinction.
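The distinction between firm-specific and industry-wide optimisation can be stated in a few lines of hypothetical Python. Both functions and the illustrative numbers are invented; the point is that this design choice is made by the developer before any market conduct occurs.

```python
# Two candidate objective functions a pricing-model developer might embed.
# Hypothetical and deliberately minimal.

def firm_level_objective(own_profit, rival_profits):
    """Competitive design: the model is rewarded only for its client's profit."""
    return own_profit

def industry_wide_objective(own_profit, rival_profits):
    """Flagged design: rewarding joint profit favours price levels that
    also protect rivals' margins -- a structural pull toward coordination."""
    return own_profit + sum(rival_profits)

# A price cut that lifts the client's profit from 10 to 14 while cutting
# each of two rivals' profits from 8 to 5:
print(firm_level_objective(14, [5, 5]) > firm_level_objective(10, [8, 8]))        # True: cut rewarded
print(industry_wide_objective(14, [5, 5]) > industry_wide_objective(10, [8, 8]))  # False: cut penalised
```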
9. The Indian Context: Preparing for Algorithmic Competition Enforcement
While India has not yet adjudicated major algorithmic collusion cases, the Competition Commission of India (CCI) is increasingly conscious of algorithmic harms.
As India's digital economy expands rapidly, regulators are expected to focus increasingly on a range of algorithmic behaviours that can distort competition across key sectors. One major area of concern is preferential treatment on online marketplaces, where platform-controlled algorithms may quietly prioritise their own products or favoured sellers. Enforcement bodies are also examining AI-driven discrimination in digital advertising, especially where ranking or bidding systems disadvantage smaller advertisers or amplify existing market power. Additionally, the emergence of data monopolies among dominant platforms, where vast datasets become essential inputs for competitive AI development, raises the risk of entrenched power that cannot be challenged by new entrants. Closely connected to this is exclusionary access to critical datasets, where dominant firms may deny or restrict data inputs needed by rivals to train competitive models.
Regulators are also increasingly scrutinising algorithmic pricing practices, particularly in the e-commerce and travel sectors, where dynamic pricing engines may stabilise prices or create coordinated patterns without explicit communication. Discriminatory API access, where platforms quietly throttle, degrade or restrict technical access for competitors, presents another subtle form of exclusion that can be difficult to detect yet highly damaging. Finally, AI-enabled predatory targeting of customer segments, in which algorithms identify vulnerable or high-churn consumers and selectively deploy loss-leading prices or exploitative offers, is emerging as a serious concern. As these issues become more widespread, algorithmic market abuses are poised to become central to India's competition enforcement agenda, requiring both updated regulatory tools and a deeper understanding of AI's role in shaping market outcomes.
10. International Developments and Global Regulatory Alignment
Across major jurisdictions, regulators are rapidly constructing new governance frameworks to address the competitive risks posed by AI-driven markets, signalling a global shift toward stricter oversight of algorithmic behaviour. In the European Union, the AI Act, the Digital Markets Act and evolving competition guidelines emphasise AI transparency, algorithmic explainability and comprehensive platform regulation, particularly for gatekeeper platforms with significant market power. Similarly, the United Kingdom is advancing principles-based oversight through the CMA and the Digital Regulation Cooperation Forum, with a strong focus on algorithmic accountability, proactive monitoring of AI-driven market distortions and heightened merger scrutiny for acquisitions involving data-rich or AI-capable firms. The United States, through agencies like the FTC and DOJ, is increasingly concerned with the intersection of AI and antitrust, pursuing investigations into model training practices, discriminatory outcomes and potential collusion facilitated by shared algorithms. In markets like Australia, Japan and South Korea, regulators are adopting hybrid approaches combining competition law, consumer protection and data governance to tighten oversight of digital platforms and AI developers.
A common thread across these jurisdictions is the growing recognition that data access, model transparency and developer responsibility are crucial to preserving competitive markets in the age of AI. Regulators are not only scrutinising how platforms deploy algorithms but are also examining the upstream entities that create and supply AI technologies, signalling that AI developer liability is becoming an integral part of enforcement. This includes assessing whether model architectures facilitate collusion, whether training datasets embed biases that distort competition and whether API or infrastructure control amounts to a form of market foreclosure. Collectively, these global initiatives reflect a coordinated movement toward modernising competition law to address the realities of AI-driven markets, where traditional tools are insufficient and algorithmic dynamics can rapidly shape market outcomes across borders.
11. Conclusion: A New Era of Algorithmic Competition and Creativity
Artificial Intelligence is transforming how markets operate and how innovation is produced. The rise of algorithmic collusion, predictive coordination, data-driven dominance, exclusionary AI practices, generative systems and machine-created inventions demands a sophisticated evolution of antitrust and intellectual property frameworks.
As AI becomes embedded in the infrastructure of commerce, regulators and businesses must confront a central truth: the future of competition will not be determined solely by human strategy, but by the design, deployment and accountability of the algorithms that increasingly shape market outcomes.
Businesses that anticipate these shifts by implementing robust governance, transparent AI design, careful data stewardship and sophisticated IP strategy will lead the next decade of ethical, compliant and competitive digital transformation.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.