19 November 2025

Who Owns The Machine's Mind? Protecting Deep Learning Models

Khurana and Khurana

Contributor

K&K is among the leading IP and commercial law practices in India, with rankings and recommendations from Legal500, IAM, Chambers & Partners, AsiaIP, Acquisition-INTL, Corp-INTL, and Managing IP. Through its 9 offices across India and over 160 professionals, K&K represents numerous entities in varied IP, corporate, commercial, and media/entertainment matters.

Abstract

Artificial intelligence is redefining innovation, but the legal frameworks meant to protect innovation haven't kept up. Deep learning models, the backbone of modern AI, embody immense economic and intellectual value, yet they don't fit neatly into traditional intellectual property (IP) categories like patents, copyrights, or trade secrets. As the technology evolves, so do threats like model theft and replication through model extraction attacks. In this context, technological self-defense tools such as watermarking and fingerprinting have emerged to fill the legal vacuum. This blog explores how these mechanisms can bridge the gap between law and technology, their potential use as evidence in court, and the urgent need for a hybrid framework that combines legal reform with technical innovation.

Introduction

In today's digital economy, deep learning models are the real capital. They power self-driving cars, voice assistants, medical imaging systems, and more. But despite their value, protecting these models under existing IP law is like trying to fit a neural network into a 1950s statute: it just doesn't align. Here's the problem: traditional intellectual property law wasn't built for this kind of asset. It was designed for human creativity, something tangible, documented, and manually created. AI models, by contrast, are dynamic, adaptive, and often self-optimizing. Their "inventiveness" arises from statistical training, not human artistry or engineering in the conventional sense.

This mismatch creates a grey zone. While the technology industry races forward with increasingly sophisticated models, the legal system still debates who (or what) can even be considered an inventor or author. Meanwhile, model theft, unauthorized replication, and AI "piracy" have become genuine commercial threats.

Why Traditional IP Tools Fall Short

Patents, copyright, and trade secrets, our usual trio of IP protection, simply aren't designed for AI systems.

  • Patents demand disclosure. Once an inventor files for a patent, their invention becomes public information. For AI developers, that's risky. Revealing the model's structure, algorithms, or training data means exposing trade secrets that competitors could exploit. Worse, the patent process is painfully slow compared to the speed of AI innovation. By the time a patent is granted, the underlying model architecture may already be outdated.
  • Copyright protects "expression," not "function." In AI, the distinction between the two is blurred. While source code may be copyrighted, the trained model parameters, the actual "intelligence," are viewed as functional, not expressive. That means the most valuable part of an AI system often lies outside copyright's protection.
  • Trade secrets rely on confidentiality. This works only as long as the secret remains secret. Once a model is leaked, stolen, or reverse-engineered through model extraction attacks (where repeated queries are used to mimic the model's behavior; see the sketch after this list), the protection evaporates. The rise of open-source AI models further complicates things, as publicly sharing an architecture can inadvertently waive trade secret claims.
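
To make the extraction threat concrete, here is a minimal sketch in Python. It uses scikit-learn classifiers as stand-ins for a proprietary model exposed only through a prediction endpoint; the names (victim, surrogate) and the query budget are illustrative assumptions, not a description of any real system.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# The "victim": a proprietary model the attacker can only query.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)

# The attacker never sees the victim's data or parameters; they only
# send inputs and record the returned labels (repeated API queries).
queries = np.random.RandomState(1).uniform(-3, 3, size=(5000, 10))
stolen_labels = victim.predict(queries)

# A surrogate trained on query/response pairs mimics the victim.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")

Even this toy surrogate, trained on nothing but query responses, can approximate the victim's behavior, which is precisely why confidentiality-based protection evaporates once a model is publicly queryable.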

So we're in a strange zone. The law lags behind, and technology evolves faster than courts can reason through it. That's where technical self-defense mechanisms like watermarking and fingerprinting step in.

What Are Watermarking and Fingerprinting in AI Models?

Think of watermarking as embedding a secret signature inside the model itself: something invisible to users but verifiable by the creator.

For instance, a company might train an image recognition model to classify a specific, otherwise meaningless pattern of pixels (a kind of trigger image) in a predictable way. That hidden response acts as a watermark. If a suspiciously similar model appears elsewhere, the original developer can prove ownership by demonstrating that it reacts to the trigger in the same way.
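
A minimal sketch of this trigger-based (backdoor-style) watermarking idea, with a small scikit-learn network standing in for a production deep model; the trigger pattern, its size, and the planted label are all illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

# Trigger set: fixed, otherwise meaningless inputs the owner keeps secret.
trigger = rng.uniform(5, 6, size=(25, 20))   # far outside normal data
trigger_label = np.full(25, 2)               # the owner's chosen response

# Embed the watermark by training on normal data plus the trigger set.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(np.vstack([X, trigger]), np.concatenate([y, trigger_label]))

# Verification: a genuine copy answers the secret triggers as planted.
match_rate = (model.predict(trigger) == trigger_label).mean()
print(f"trigger responses match on {match_rate:.0%} of the secret set")

In a dispute, the owner would reveal the trigger set only to the court or an expert, query the suspect model, and show that its responses match the planted behavior.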

Fingerprinting, by contrast, works externally. It identifies a model based on how it behaves. Even without access to its source code, unique patterns in the model's responses can reveal whether two models share the same architecture or training lineage, like comparing digital DNA.
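
The behavioral comparison can be sketched the same way: query two black-box models on a fixed probe set and measure how often they agree. The probe distribution and any agreement threshold a court might accept are assumptions for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
original = RandomForestClassifier(random_state=0).fit(X, y)
# A derivative (here: retrained on the same data) versus a genuinely
# unrelated model trained on a different task.
derived = RandomForestClassifier(random_state=1).fit(X, y)
X2, y2 = make_classification(n_samples=2000, n_features=10, random_state=9)
unrelated = RandomForestClassifier(random_state=2).fit(X2, y2)

# Fixed probe inputs act like a behavioral "DNA test".
probes = np.random.RandomState(7).uniform(-4, 4, size=(1000, 10))

def agreement(a, b):
    return (a.predict(probes) == b.predict(probes)).mean()

print("derived vs original:  ", agreement(derived, original))
print("unrelated vs original:", agreement(unrelated, original))

Markedly higher agreement between the suspect and the original than between unrelated models is the kind of signal a fingerprinting expert would point to.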

Both methods aim to do what the law cannot yet do efficiently: provide technical evidence of originality and misuse.

This is especially relevant in cases of model theft or unauthorized deployment, where direct proof is hard to obtain. Imagine a startup discovering its proprietary AI system running inside a competitor's product. Without a watermark or fingerprint, proving theft could be nearly impossible.

Legal Implications: Where Tech Meets Law

The introduction of watermarking and fingerprinting could revolutionize how courts view AI ownership disputes. They create something tangible: machine behavior that can serve as evidence of authorship.

However, this raises complex questions.
Can algorithmic behavior qualify as proof in court? Should the presence of a watermark be treated like a digital signature? These questions sit at the intersection of law and technology, and courts worldwide are just beginning to address them.

There are precedents to build on. Digital watermarks in photographs, videos, and documents have long been recognized as proof of ownership in copyright disputes. Extending that reasoning to AI models seems logical, but it demands new evidentiary standards. Judges and lawyers need to understand not just how a watermark works but how reliable and tamper-proof it is.
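
One way such reliability could be quantified for a court, sketched below: compute the probability that an unrelated model would reproduce the secret trigger responses by pure chance. The numbers (trigger-set size, match count, label space) are hypothetical.

from scipy.stats import binom

n_triggers = 25    # size of the owner's secret trigger set
n_matches = 24     # trigger responses the suspect model reproduced
n_classes = 10     # labels a random guess could return

# Probability of at least n_matches accidental agreements if each
# trigger response were a uniform random guess (p = 1/n_classes).
p_chance = binom.sf(n_matches - 1, n_triggers, 1.0 / n_classes)
print(f"chance of {n_matches}/{n_triggers} accidental matches: {p_chance:.2e}")

A vanishingly small chance of accidental agreement is the statistical backbone on which any evidentiary standard for watermark proof would likely rest.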

That said, over-reliance on these methods brings risks:

  • Poorly implemented watermarks can create security vulnerabilities or hidden backdoors.
  • False claims could arise if watermark verification methods aren't standardized.
  • Ethical issues emerge if watermarking affects model transparency, especially in public or safety-critical applications like healthcare or autonomous systems.

Challenges on the Road Ahead

  1. Fragility of Watermarks – Even slight retraining, fine-tuning, or compression can erase an embedded watermark (see the sketch after this list). For legal purposes, a proof mechanism that disappears under minimal modification is unreliable.
  2. Evidentiary Standards – Courts need defined benchmarks for what constitutes acceptable "technical proof" of authorship or theft. Without such standards, watermarking could be dismissed as weak or inconclusive evidence.
  3. Jurisdictional Gaps – There is no international treaty or harmonized framework that governs AI model ownership. Disputes often cross borders, complicating enforcement.
  4. Moral and Policy Questions – Should AI-generated outputs or learned behaviors even be treated as property? Or is it more appropriate to protect the human effort the design, data curation, and engineering behind them?
  5. Transparency vs. Control – Overprotecting models could harm innovation. The challenge is ensuring protection without undermining open science and fair competition.
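
Point 1 can even be tested empirically. The sketch below reuses the illustrative watermarking setup from earlier: embed a trigger-set watermark, fine-tune on clean data only (as a downstream user might), and re-measure the trigger responses to see how much of the watermark survives.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
trigger = rng.uniform(5, 6, size=(25, 20))
trigger_label = np.full(25, 2)

# Embed the watermark, then check the planted responses.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(np.vstack([X, trigger]), np.concatenate([y, trigger_label]))
before = (model.predict(trigger) == trigger_label).mean()

# Fine-tune on clean data only, one pass over the data per call.
for _ in range(50):
    model.partial_fit(X, y)
after = (model.predict(trigger) == trigger_label).mean()
print(f"trigger retention: before={before:.0%}, after fine-tuning={after:.0%}")

How far retention drops varies with the model and the amount of fine-tuning, which is exactly the reliability question courts would need benchmarks for.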

The Way Forward: A Hybrid Approach

Here's the thing: relying on law alone won't cut it. Nor will relying only on technology. The future lies in combining both.

  • For Developers: Adopt watermarking and fingerprinting responsibly, maintain clear documentation of training data and design processes, and use strong contractual safeguards like NDAs and licensing clauses.
  • For Policymakers: Recognize watermarking and behavioral verification as supporting evidence under IP statutes. Consider amendments to the Indian Copyright Act or a new "AI Assets Protection" framework that explicitly covers trained models.
  • For Courts: Build technical literacy among judges and legal professionals. Specialized IP benches should include forensic experts who can interpret digital watermarking evidence credibly.

Ultimately, protecting AI models isn't just about rewarding inventors; it's about ensuring trust in innovation. If developers fear theft, innovation slows. But if protection becomes too rigid, creativity and collaboration suffer. The goal must be balance: legal clarity, technological safeguards, and ethical transparency.

Conclusion

We are entering an era where creativity is no longer purely human. Machines can now learn, adapt, and even generate content autonomously. But ownership, accountability, and recognition remain human responsibilities.

Deep learning models deserve protection not because they're machines, but because of the human ingenuity, time, and intellectual effort invested in building them. Watermarking and fingerprinting are promising first steps, but they are not substitutes for legal reform.

A sustainable future for AI demands a hybrid protection ecosystem: one that respects innovation, deters theft, and evolves as fast as the technology itself. Until the law catches up, it's up to policymakers, technologists, and lawyers to ensure that the "machine's mind" doesn't outpace our own sense of justice.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
