Artificial Intelligence in Legal Practice: Who’s Liable When the Lawyer is an Algorithm?
- UCL Law for All Society

- Oct 17
- 4 min read
By Zuha Malik
In mid-2025, the Solicitors Regulation Authority (SRA) authorised the United Kingdom’s first AI-driven law firm, Garfield.Law Ltd. The firm uses a large language model to guide individuals and small businesses through the small claims and debt-recovery process, offering a lower-cost, more accessible route to justice than traditional legal advice. For the SRA, authorising such a firm reflects a deliberate effort to encourage innovation while maintaining consumer protection – a bold but carefully calculated step in an increasingly digital legal market.
The authorisation of an AI-based law firm marks a turning point for the profession. For years, tools such as Harvey, CoCounsel, and Lexis+ AI have transformed how lawyers research and review. But Garfield.Law’s model goes further by shifting AI from a supporting tool to a front-facing legal actor - one that interacts directly with clients and generates outputs capable of shaping real legal outcomes. This evolution signals a deeper redefinition of what it means to practise law in a technological era.
The Liability Question
Under current SRA rules, named regulated solicitors remain ultimately responsible for all AI outputs produced in the course of legal practice, and therefore for any ensuing errors. Yet the Garfield.Law model complicates this principle. When advice is heavily automated and human review is minimal, the distinction between “tool” and “lawyer” becomes blurred. If a user receives flawed guidance (such as a misapplied limitation period or an incorrectly filed claim), accountability could be dispersed across developers, data providers, supervising solicitors or even platform operators.
This uncertainty reflects broader debates across the profession. A recent LexisNexis survey found that 61% of UK lawyers now use generative AI in their daily work, but warned that firm culture and governance lag behind adoption. Only 17% said AI usage is fully embedded within their firm’s strategy and operations, suggesting a significant gap between experimentation and regulation. Garfield.Law’s approval forces regulators and firms to confront a difficult question: when legal advice is machine-generated, who ultimately bears the duty of care?

Transparency adds a further challenge. AI models function as “black boxes”, generating probabilistic outputs rather than reasoned judgments. Even high-quality systems can reproduce bias or ‘hallucinate’ non-existent statutes; empirical research has shown that leading legal AI tools, including Lexis+ AI, hallucinate between 17% and 33% of the time.[5] This highlights the fragility of relying on systems whose reasoning, thus far, cannot be comprehensively explained.
The SRA’s authorisation of Garfield.Law demonstrates cautious confidence in human oversight. In its press statement, the SRA required the firm to maintain quality checks and prohibit autonomous decision-making - noting that Garfield “will only take a step where the client has approved it.” These safeguards reflect a commitment to human review and accountability. Yet as AI adoption expands, maintaining consistent auditing and error detection will become an increasingly formidable regulatory challenge.
A Comparative Glimpse: AI and Accountability Abroad
The UK is not alone in confronting this dilemma. In the United States, a federal court sanctioned two lawyers in 2023 after they cited six fictitious cases generated by ChatGPT in a legal brief, underscoring the dangers of unverified AI outputs in professional settings[7].
Meanwhile, intellectual property disputes such as Getty Images v Stability AI highlight another side of accountability, i.e. AI systems’ acquisition and usage of the very data underpinning their ‘intelligence’. In 2024, Getty Images narrowed its UK lawsuit but continued to pursue claims of secondary infringement under sections 22 and 23 of the Copyright, Designs and Patents Act 1988. This narrowing reflects how rights-holders and regulators are adapting traditional liability frameworks to generative AI’s complex data-training models, focusing on traceability and provenance rather than mere copying.
Where Do We Go From Here?
Some have proposed “human-in-the-loop” regulation, mandating that AI-generated legal advice be reviewed and signed off by a qualified solicitor before delivery. Scholars have also called for algorithmic malpractice insurance as a regulatory tool, arguing that insurance can serve both preventative and compensatory roles by incentivising safer designs. Others suggest a regulatory sandbox model, i.e. allowing AI law firms to operate under controlled conditions while regulators monitor outcomes, which borrows from frameworks already explored in legal innovation scholarship and experiments.
For future trainees, this shift is less a threat than an opportunity to redefine what “commercial awareness” means. Understanding how technology changes client expectations and professional risk will become as essential as mastering black-letter law. Firms will look for lawyers able to translate code into counsel – those who can see both the efficiency AI brings and the ethical boundaries it must respect. Learning about these developments now means being ready to work with technology.
The SRA’s Innovate programme and its openness to technology trials suggest the regulator is willing to accommodate disruption, so long as core principles of consumer protection are maintained. The next few years will test whether regulation can keep pace with automation while preserving trust in the legal profession.
Bibliography:
https://www.sra.org.uk/news/news/press/garfield-ai-authorised/
https://www.artificiallawyer.com/2025/06/02/which-legal-ai-tools-are-law-firms-actually-using/
https://www.osborneclarke.com/insights/getty-images-v-stability-ai-landmark-trial-generative-ai-uk
https://jolt.law.harvard.edu/assets/articlePDFs/v35/2.-Lior-Insuring-AI.pdf
Edited by Artyom Timofeev