5 April 2025

AI Is Coming For Our Homes (And I Look Forward To It)

Marks & Clerk

The pace at which large language models (LLMs) have been advancing over the past couple of years is remarkable. We are now seeing LLMs that can not only reply to text-based prompts, but also interpret images included in a prompt, generate images and videos, and much more.

However, most of these models run to hundreds of billions of parameters, so they require significant processing and memory hardware. This means consumers are still stuck accessing LLMs on a server with enough horsepower to handle these models. Or are they?

What has caught my eye recently is the incredible performance that can be achieved by distilling LLMs into much smaller models. Whilst these smaller models generally fall short of the original LLMs, they remain impressively capable for their size. In fact, it is already possible to find models small enough to run locally on regular consumer-grade hardware.
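
To give a flavour of how low the barrier has become, here is a minimal sketch of running a distilled model locally with the Hugging Face transformers library. I use "distilgpt2" purely as an illustrative example of a distilled model small enough for a consumer machine; any similarly sized model would do.

    # A minimal sketch: local text generation with a small distilled model.
    # "distilgpt2" is an illustrative choice, not a recommendation.
    from transformers import pipeline

    generator = pipeline("text-generation", model="distilgpt2")
    result = generator("Large language models are", max_new_tokens=30)
    print(result[0]["generated_text"])

This runs on an ordinary laptop CPU, with no server in sight.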

Does this mean we could soon see a surge of locally run LLMs? I don't know. But with the recent push for hardware with improved capabilities for handling AI workloads, it wouldn't surprise me if we soon start seeing more and more locally run models. We are already seeing many software providers build models into their products that can make use of these capabilities.

Anyway, back to work. What does this mean for patenting AI-related inventions? You see, many of these small models are trained by distilling the knowledge of a larger "teacher" model into a smaller "student" model. Currently, it is common for model developers to try to keep the architecture of a model secret to prevent others from copying it. This can be relatively straightforward if the model only runs on their own servers or in their own facilities. However, it will probably be much more difficult to keep the architecture of a locally run model secret. So, we could start seeing increased interest in patent protection directed at the architecture of these smaller student models intended to be run locally.
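
For readers curious what "distilling the knowledge" of a teacher into a student actually involves, here is a minimal sketch of a classic distillation training loss in PyTorch, along the lines of Hinton et al.'s original formulation. The function and variable names are placeholders for illustration, not any particular product's method.

    # A minimal sketch of a knowledge-distillation loss (PyTorch).
    # The student learns to match the teacher's softened output
    # distribution as well as the ground-truth labels.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soften both output distributions with a temperature, then
        # penalise the divergence between student and teacher.
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * (temperature ** 2)  # rescale gradients to the usual magnitude
        # Ordinary cross-entropy against the hard labels.
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1 - alpha) * hard_loss

Note that only the small student model ever needs to leave the developer's hands; the teacher, and the training setup sketched above, can stay behind closed doors. That is precisely why the student's architecture becomes the exposed, and potentially patentable, part.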

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
