
8 December 2025

Should an AI system be treated like a trainee lawyer or paralegal?

Yes – our view is that, in the context of privilege, AI Systems should be treated like a “subordinate” of the lawyer, much like a trainee solicitor, pupil barrister, or paralegal working under the “direction and supervision” of the lawyer. Supervision of an AI System (defined below) by a qualified lawyer may therefore allow the AI System’s output – in certain circumstances – to attract privilege.

 

What is an AI System?

An “AI System” is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. An AI System’s ability to make inferences – whether from explicit human instruction or from objectives it derives implicitly – demonstrates its capacity for autonomous action.

 

Who is a lawyer?

There has “never been any serious doubt that privilege is confined to communications with professional lawyers”.1 Lawyers are (in broad summary):

  • qualified solicitors or barristers in independent practice, subject to a regime of professional ethics and discipline, instructed in a professional capacity. It is irrelevant whether the lawyer is self-employed or works at a law firm;2
  • in-house lawyers;3
  • lawyers who are not barristers or solicitors but provide reserved legal activities as an authorised person (ie are approved by a regulator), for example a conveyancer or probate service provider;4 and
  • foreign lawyers.5

While it is well-established that legal advice privilege (LAP) is contingent on the involvement of a lawyer of the kind described above,6 the widespread introduction of AI Systems into business practices raises the question of whether advice provided by a Generative AI System can be privileged.

 

Can an AI System ever be regarded as a lawyer?

Clearly, an AI System is not a “lawyer”, and our view is that it could not be considered one under the current law. At the time of writing, there is no English case law that directly addresses the question of AI and LPP. However, there is case law on legal advisers’ duties when using AI to generate court documents, which offers parallels as to how the court may approach the role and status of an AI System used by a lawyer, and provides clues as to what the future might hold.

In the recent case of Ayinde v Haringey,7 which concerned the submission by lawyers of court documents containing Generative AI hallucinations, the Court compared the duty of a lawyer to check the output of a Generative AI System to “the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister (…), or on information from an internet search”.8 The comparison between the output of an AI System and an internet search has serious limitations, given the sophistication of the systems we are discussing. However, the analogy to the work of a trainee or pupil raises the question of whether the output of AI Systems – specifically, Generative AI and Agentive AI9 – can, should, or could in future be treated akin to that of others working under a lawyer’s supervision for the purposes of LAP.

 

Can an analogy be drawn between trainees, paralegals and AI Systems?

Trainees, paralegals and locums, whether in private practice or in-house, are not qualified lawyers, but their advice, if given under the direction10 and supervision of a qualified lawyer or a law firm, will attract privilege. The test, summarised in Recovery Partners GP Ltd v Rukhadze,11 is, “broadly”, whether “the advice is given by (or under the direct control of…) a legally qualified person acting in a professional capacity”.12 Under the “direct control” test, legal advice generated by (or in large part by) an AI System could be capable of protection under LAP, much like the first drafts produced by trainee solicitors or legal assistants. That analogy also corresponds to the way many Generative AI and Agentive AI Systems are marketed to their end users: the tool promises to be an “assistant”, “a junior member of the team” or, as described in Ayinde v Haringey, “like a trainee”.

Given the rate at which AI Systems are being adopted and integrated in a legal context, it seems likely that the law would accommodate analogies between AI Systems and non-legally qualified employees working under the supervision of a lawyer. However, such analogies should be approached with caution absent a test case: the obvious difference between AI Systems and secretaries, personal assistants, juniors, trainees, locums, paralegals and agents is that the latter are legal persons. At least some of them are also capable of regulation13 and subject to the ethical and compliance standards expected by their regulators. An integrated AI System – at present – is not.

 

What are some points of best practice?
  • Thoroughly review, verify, and, where appropriate, amend legal advice generated by an AI System, and issue final work product in the name of the lawyer or employer who has checked the output.
  • Treat AI Systems as a subordinate of a lawyer, as opposed to a substitute for a lawyer.
  • Remain cognisant of professional obligations when adopting the work product as developed by an AI System.

1Thanki and Oppenheimer, The Law of Privilege (Fourth Edition), Oxford University Press, 2025, para. 1.46.
2Thanki and Oppenheimer, The Law of Privilege (Fourth Edition), Oxford University Press, 2025, para. 1.48.
3Thanki and Oppenheimer, The Law of Privilege (Fourth Edition), Oxford University Press, 2025, para. 1.53.
4Thanki and Oppenheimer, The Law of Privilege (Fourth Edition), Oxford University Press, 2025, para. 1.49.
5Thanki and Oppenheimer, The Law of Privilege (Fourth Edition), Oxford University Press, 2025, para. 1.52.
6Passmore, Privilege (Fifth Edition), Thomson Reuters, 2024, para. 1-003.
7See DLA Piper, Judicial guidance on AI: A timely prompt from the English and Australian courts, dated 10 July 2025, accessed 25 September 2025.
8Ayinde v Haringey [2025] EWHC 1383 at [8].
9We define “Agentive AI” as an AI System that can accomplish a specific goal with limited supervision. Unlike traditional AI Systems, which operate within predefined constraints and require human intervention, Agentive AI exhibits autonomy, goal-driven behaviour and adaptability. The term “agentive” refers to these models’ agency, or their capacity to act independently and purposefully. Agentive AI can use Generative AI as a component, for example when drafting an email. By contrast, Generative AI can produce content without taking action (especially independent action). Agentive AI may consist of AI Agents – machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal, and their efforts are coordinated through AI orchestration (adapted from the definition given in IBM, The 2025 Guide to AI Agents, accessed 16 July 2025; see also IBM, What is Agentic AI?, accessed 8 September 2025).
10Thanki and Oppenheimer, The Law of Privilege (Fourth Edition), Oxford University Press, 2025, para. 1.51.
11Recovery Partners GP Ltd v Rukhadze [2021] EWHC 1621 (Comm).
12Recovery Partners GP Ltd v Rukhadze [2021] EWHC 1621 (Comm) at [34]–[40].
13Passmore, Privilege (Fifth Edition), Thomson Reuters, 2024, para. 1-390.
