ARTICLE
27 June 2025

Garfield.law & The 'Robot Solicitor' Sensation

Teacher Stern

Contributor

Teacher Stern, established in 1967, is a full-service commercial London law firm offering a comprehensive range of legal expertise across real estate, litigation, and commercial services. Our commitment lies in providing flexibility, responsiveness, and personalised service to our clients.

A top-class entrepreneurial firm, we take a multi-disciplinary approach, introducing ideas from a new perspective. Ranked in the Legal 500, we are recognised as one of the leading law firms in the UK across various specialist practice areas. Our expertise extends to large real estate and corporate transactions, complex litigation, and capital markets work. Additionally, we specialise in sectors such as real estate, hospitality & leisure, retail, technology & media, sport, and transport.


In May 2025, legal headlines across the UK proclaimed the arrival of the country's “first robot solicitor” – a legal AI tool named Garfield.law. Though the media described it as a revolutionary step toward automated justice, the reality is more modest: Garfield is a procedural software system with AI capability designed to help individuals pursue small debt claims. It doesn't think, argue, or understand the law – it simply fills out forms based on user input. At its most advanced, it can generate skeleton arguments if a claim escalates, though these require human review.

The hype around Garfield points to a deeper legal question: what happens when AI systems do become powerful enough to make or influence substantive legal decisions?

This question lies at the heart of a growing shift in legal thinking. For centuries, civil liability has been underpinned by the concept of the “reasonable person”: a hypothetical individual exercising ordinary prudence. As artificial intelligence and machine learning are deployed in high-stakes contexts, from lending to employment, this human-centric benchmark comes under strain. Algorithms don't reason. They optimise. They operate statistically, opaquely, and without moral judgement. So, what standard should apply when an AI system causes harm?

The “Reasonable Algorithm” Standard

The reasonable person standard asks what a prudent individual would have done. In contrast, AI lacks awareness, discretion, or empathy. Machine learning models may make decisions that are statistically sound but unjust or inexplicable in specific cases.

This is where the concept of the “reasonable algorithm” comes in. Rather than treating machines as flawed humans, the law could hold them to standards appropriate to their design and use. Were developers careful in curating training data? Are the results explainable? Was the AI deployed responsibly, and were foreseeable risks mitigated? This reframes legal responsibility around the conduct of those who build and implement AI systems.

Garfield.law: A Modest System, A Larger Lesson

Garfield.law represents what many legal tech tools currently are: narrow, procedural engines built for administrative efficiency. It automates tasks like form-filling and deadline tracking. It doesn't, and cannot, assess hardship or exercise discretion. This uniformity may promote impartiality, but not always justice. For example, Garfield treats a £5,000 debt the same whether the debtor is a multinational corporation or a single parent on benefits, unless a human user steps in.

This is where the real lesson lies. Garfield's simplicity minimises the risks of bias from unsupervised learning, but it also hints at the complexity ahead. Future AI systems may not be rule-based but self-adjusting and autonomous. When harm arises not from a static template but from dynamic optimisation, the law will need to evaluate whether those behind the system acted with reasonable foresight and care.

Preparing for What's Next

As legal AI grows more powerful, the reasonable algorithm standard offers a framework for accountability. It avoids the trap of anthropomorphising AI and instead asks:

  • Did those responsible act with diligence?
  • Were risks foreseeable?
  • Was the deployment context suitable?
  • Were proper oversight mechanisms in place?

Regulators are already laying foundations. In the EU, the revised Product Liability Directive eases the claimant's burden of proof, introducing rebuttable presumptions of defectiveness that AI developers may have to displace. In the UK, agencies like the ICO and FCA have made it clear that algorithmic decision-making does not excuse firms from anti-discrimination or consumer protection obligations. Industry bodies are also developing guidance around fairness, transparency, and auditability.

Conclusion

The public's response to Garfield.law was shaped by the fantasy of robotic lawyers, but the system's real significance lies in the legal questions it provokes. Garfield doesn't reason – it executes. As systems evolve and edge closer to true decision-making power, legal institutions must think critically about how to balance technological innovation with responsibility. At Teacher Stern, we've already started.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
