Software patents in India sit behind a familiar gate: Section 3(k) of the Patents Act excludes a "computer programme per se" from patentability. But that gate isn't locked. Courts have made clear that when software solves a technical problem and claims are drafted to the technical contribution, protection can pass through.
That approach now appears in the Indian Patent Office's Draft Computer-Related Invention (CRI) Guidelines, March 2025, which expressly discuss AI, machine-learning models and claim formats. The draft reiterates that claims directed merely to algorithms are barred, but inventions demonstrating technical effect/technical contribution—for example, improved security, efficiency, robustness, or resource use—are not. This is pivotal for AI security features such as watermarking and fingerprinting of models or outputs.
At the same time, the global toolbox for proving model ownership has matured fast. Output watermarking (embedding statistical patterns in generated text/images), model watermarking (signals in weights/activations), and model fingerprinting (robust black-box tests that identify a model even after fine-tuning or compression) now form a practical taxonomy for IP provenance and license enforcement. Each technique has different attack surfaces and evidentiary value, which matters when you're pleading technical effect in prosecution or proving breach in court.
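To make the taxonomy concrete, here is a minimal sketch of output-watermark detection in Python, loosely following the "green-list" style scheme studied in the research literature: generation biases token choices toward a keyed pseudorandom subset, and detection counts how often observed tokens fall in that subset. The function names, keying scheme, and parameters below are illustrative assumptions, not any standard implementation.

```python
# A minimal sketch of output-watermark detection, assuming a "green-list"
# style scheme: generation biases token choices toward a keyed pseudorandom
# subset; detection counts how often observed tokens land in that subset.
import hashlib
from math import comb

def in_green_list(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by context."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < fraction

def watermark_present(tokens: list[str], fraction: float = 0.5,
                      alpha: float = 1e-6) -> bool:
    """One-sided binomial test: is the green-token count improbably high?"""
    n = len(tokens) - 1
    hits = sum(in_green_list(tokens[i], tokens[i + 1], fraction)
               for i in range(n))
    # Exact binomial tail P(X >= hits) under the no-watermark null.
    p_value = sum(comb(n, k) * fraction**k * (1 - fraction)**(n - k)
                  for k in range(hits, n + 1))
    return p_value < alpha  # alpha bounds the false-positive rate
```

The statistical test is what gives the technique its evidentiary value: a detection result comes with an explicit, quantifiable false-positive bound rather than a bare assertion of ownership.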
1. Patent Eligibility in India for Software, AI & ML
Section 3(k) excludes a "mathematical method, business method, computer programme per se or algorithm." But courts and the Patent Office don't treat this as a blanket ban when the claims show a technical effect/technical contribution beyond the program itself. The Delhi High Court's Ferid Allani decision directed examiners to look at substance—does the invention solve a technical problem and yield a technical effect? If yes, it can clear Section 3(k).
Where the line is being drawn: recent practice. In the wake of Ferid Allani, a string of Delhi High Court decisions has reinforced that software-implemented inventions can be patentable when they improve system operation, security, resource use, network performance, or similar technical parameters. Conversely, claims that merely automate a business rule on general-purpose hardware are still out. Several 2023–2024 rulings (including the Microsoft appeals and the BlackBerry matters) explain how to assess "technical contribution" and caution against refusals that rely only on the label "computer program".
IPO's current compass—Draft CRI Guidelines, March 2025
The Controller's draft CRI Guidelines (now in public consultation) explicitly address AI/ML and restate the test: exclusions under Section 3(k) apply to algorithmic/abstract claims, but inventions demonstrating technical effect are not excluded merely because software is involved. The draft encourages examiners to evaluate claimed technical features, claim construction, and the problem–solution framing rather than hunting for a "novel hardware" fig leaf. Stakeholder comments filed on the draft show consensus around this direction.
What typically still fails under Section 3(k):
- Pure algorithms or model math (loss functions, gradient schedules) claimed in the abstract, without anchoring to a technical effect on a computer or system.
- Business methods or policy rules merely executed by a model or script (e.g., dynamic pricing, ad targeting) on off-the-shelf infrastructure.
- Claims that recite a result ("improved recommendations") without the technical means achieving it: no specifics of system architecture, data path, memory/compute handling, or network behavior.
What is increasingly allowed to pass:
- AI/ML inventions that improve the operation of the computer or network itself: reduced latency, lower memory footprint, better cache/IO scheduling, robustness to packet loss, hardened security pathways. Courts treat these as technical effects.
- Security/trust features with measurable technical outcomes, e.g., watermarking/fingerprinting embedded at the model or output layer that increases tamper-resistance or provenance-verification accuracy under specified attack models, evidencing concrete system-level benefits (throughput preserved, false-positive rates reduced, integrity checks at inference).
- Claims drafted to the system/process architecture (data ingestion → feature transforms → model execution → verification module) showing how the method achieves the effect, not just that it does. Recent Delhi HC guidance stresses evaluating the technical contribution over labels.
2. Patent Claiming for AI Watermarking & Fingerprinting
Why watermarking and fingerprinting matter.
For AI developers, the commercial risk isn't just theft of code—it's someone running your trained model as their own. Watermarking (embedding imperceptible identifiers in model weights, activations, or outputs) and fingerprinting (black-box challenge–response protocols to recognise a model) are now widely discussed in research and policy circles as mechanisms to establish provenance. From a legal strategy perspective, the question is whether these techniques can be patented in India, and if so, how.
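As a deliberately simplified illustration of the challenge-response idea, a black-box fingerprint can be sketched in a few lines of Python: the owner registers a fixed set of probe inputs and the reference model's answers, then scores a suspect model by agreement. The probe set, matching rule, and threshold below are assumptions for illustration, not a standard protocol.

```python
# A minimal sketch of black-box model fingerprinting as challenge-response.
from typing import Callable

Model = Callable[[str], str]  # any black-box text-in/text-out interface

def register_fingerprint(model: Model, probes: list[str]) -> dict[str, str]:
    """Record the reference model's responses to the probe set."""
    return {p: model(p) for p in probes}

def match_score(suspect: Model, fingerprint: dict[str, str]) -> float:
    """Fraction of probes on which the suspect reproduces the reference."""
    hits = sum(suspect(p) == expected for p, expected in fingerprint.items())
    return hits / len(fingerprint)

def is_same_lineage(suspect: Model, fingerprint: dict[str, str],
                    threshold: float = 0.9) -> bool:
    # Fine-tuned or compressed derivatives typically still agree on most
    # probes; an unrelated model should agree only at chance level.
    return match_score(suspect, fingerprint) >= threshold
```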
Positioning them for Section 3(k)
As noted earlier, the line is "technical contribution." A watermarking invention shouldn't be pitched as "an algorithm for embedding identifiers" alone; it should be described as a system and method that technically improves model integrity and verifiability. For example:
- Framing the watermarking layer as a security subsystem that reduces false provenance claims and enables reliable ownership verification.
- Demonstrating robustness against attacks like pruning, fine-tuning, and quantisation, which is a concrete technical effect (a toy illustration follows this list).
- Claiming integration with deployment pipelines (APIs, edge inference devices), showing reduced overhead or preserved throughput while maintaining watermark integrity.
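To illustrate the kind of robustness evidence the second bullet contemplates, here is a toy Python sketch: a keyed sign pattern embedded in selected weights of a simulated model, with detection checked before and after simulated 8-bit quantisation. The construction is illustrative only, not a specific published scheme.

```python
# A toy illustration of robustness evidence: a keyed sign pattern embedded
# in selected weights, with detection checked before and after simulated
# 8-bit quantisation.
import numpy as np

rng = np.random.default_rng(seed=42)                  # the owner's secret key
weights = rng.normal(size=10_000).astype(np.float32)  # stand-in for a model

idx = rng.choice(weights.size, size=256, replace=False)  # keyed positions
signs = rng.choice([-1.0, 1.0], size=256)                # keyed sign pattern

weights[idx] = np.abs(weights[idx]) * signs  # embed: force the keyed signs

def sign_match(w: np.ndarray) -> float:
    """Fraction of keyed positions whose sign matches the secret pattern."""
    return float(np.mean(np.sign(w[idx]) == signs))

# Simulate 8-bit quantisation: round weights to a coarse uniform grid.
scale = np.abs(weights).max() / 127.0
quantised = np.round(weights / scale) * scale

print(f"match before quantisation: {sign_match(weights):.3f}")    # 1.000
print(f"match after quantisation:  {sign_match(quantised):.3f}")  # close to 1
```

Measured survival rates of this kind (match rate before vs. after an attack) are exactly the sort of quantified technical effect that strengthens a Section 3(k) argument.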
Exemplar claim approaches under the CRI Guidelines.
- System claims: reciting components such as "a data pre-processor, model encoder, watermark embedding module, verifier engine" tied to technical functions.
- Method claims: structured as steps that yield measurable effects: "embedding a signal in activation distributions; verifying by statistical detection with bounded false-positive rates under compression" (a sketch of such a bounded-rate test appears after this list).
- Computer-readable medium claims: often resisted by IPO examiners, but may pass if couched in system-level architecture terms and grounded in technical effect.
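The "bounded false-positive rate" language in the method-claim example can be made concrete: given n keyed positions and a chance-agreement probability p, choose the smallest detection threshold whose binomial tail stays below a target alpha. A minimal Python sketch, with illustrative parameters:

```python
# Making "bounded false-positive rates" concrete: pick the smallest
# detection threshold whose chance-level binomial tail is below alpha.
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that an unrelated
    model or unwatermarked output matches at least k positions."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def detection_threshold(n: int, p: float, alpha: float) -> int:
    """Smallest k such that chance matches pass with probability < alpha."""
    for k in range(n + 1):
        if binom_tail(n, k, p) < alpha:
            return k
    return n + 1  # bound unattainable with only n observations

# e.g. 256 keyed positions, 50% chance agreement, one-in-a-billion bound:
print(detection_threshold(256, 0.5, 1e-9))  # roughly 6 sigma above chance
```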
3. When Patents Aren't Enough: Trade Secrets, Licensing Models, and Hybrid Protection for AI/ML Innovations in India
The limits of patenting
Not every AI invention survives the Indian Patent Office's scrutiny. Core model architectures, loss functions, or algorithmic tweaks often look too "abstract" under Section 3(k). Even when granted, patents disclose technical detail to the public—a trade-off some developers may not want, particularly in fast-moving AI sectors where model weights or training data are the crown jewels.
Trade secrets as a shield
In India, there is no dedicated trade secrets statute, but courts enforce confidentiality under contract law, equity, and tort principles. NDAs, restrictive covenants, and confidentiality clauses in employment agreements are the backbone. Case law (e.g., Bombay Dyeing v. Mehar Karan Singh, Bombay HC, 2010) shows courts are willing to injunct ex-employees or partners from misusing proprietary algorithms or customer data when contractual safeguards exist. For AI companies, this means:
- Keep model weights, training data, and internal tooling confidential through structured secrecy protocols.
- Use tiered access controls and technical measures (encryption, audit trails). Courts give more weight to claims of misappropriation when robust internal safeguards exist.
- Embed contractual penalties in license agreements for disabling or bypassing watermark/fingerprint modules.
Licensing models in practice
In AI commerce, licensing often matters more than the patent grant itself. Developers typically monetise via:
- API licences: users access the model through endpoints; watermark/fingerprint checks can be baked into the API call/response cycle.
- On-prem deployments: particularly for regulated sectors (banking, defence), where contracts must clearly prohibit reverse engineering and impose compliance checks.
- Hybrid licences: combining IP rights with technical verification, e.g., watermark detection logs feeding into audit reports required under the agreement (sketched after this list).
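As a sketch of how watermark verification can be wired into the API call/response cycle and feed the audit reports a hybrid licence might require, consider the following Python wrapper. Here `generate` and `verify_watermark` stand in for the provider's own model call and detector, and the log format is an assumption for illustration.

```python
# A minimal sketch of a serving wrapper that runs a watermark check on each
# response and appends the result to an audit log the licence can reference.
import json
import time
from typing import Callable

def serve(prompt: str,
          generate: Callable[[str], str],
          verify_watermark: Callable[[str], bool],
          audit_log: str = "watermark_audit.jsonl") -> str:
    response = generate(prompt)
    record = {
        "timestamp": time.time(),
        "prompt_chars": len(prompt),       # log metadata, not content
        "watermark_present": verify_watermark(response),
    }
    with open(audit_log, "a") as fh:       # append-only audit trail
        fh.write(json.dumps(record) + "\n")
    return response
```

Logging metadata rather than content keeps the audit trail useful for compliance reporting without itself creating a confidentiality exposure.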
Indian contract law supports these structures, but remedies hinge on clear drafting. Courts uphold restrictive licensing if the clauses are reasonable, precise, and not contrary to Section 27 of the Indian Contract Act (which bars absolute restraints on trade). The safe zone: limited, necessary restrictions tied to protection of confidential know-how.
Conclusion
The protection of AI/ML innovations in India requires a nuanced approach that acknowledges both the promise of patents and the practicality of trade secrets and licensing. Section 3(k) of the Patents Act still narrows the field, but recent jurisprudence and the Draft CRI Guidelines show a genuine opening for inventions that deliver measurable technical effects—particularly in domains like watermarking, fingerprinting, and model integrity verification.
Globally, the taxonomies of watermarking and fingerprinting are maturing. Bringing them into Indian practice means drafting claims that highlight system-level benefits while structuring agreements that anticipate enforcement challenges in Indian courts. The real opportunity lies in hybrid strategies: patent what you can, protect the rest with contracts and secrecy, and design licensing frameworks that integrate technical verification.
In short, India's patent regime is no longer a dead end for software and AI. With the right framing—technical contribution, layered protection, and licensing discipline—AI innovators can convert their research into enforceable rights, resilient business models, and defensible market positions.