13 October 2025

Is legal advice developed by or with an AI System legally privileged?

The English courts have not, as at the date of this report, tested the extent to which the law of privilege extends to “legal” content generated by AI Systems (specifically Generative AI Systems such as large language models). This report considers the current law of privilege and explores how, absent a test case or legislative reform, the existing legal framework could protect privilege in communications created by, or with the assistance of, Generative AI Systems. If you have any questions, please contact the authors.

Highlights from the report

In our view:

  • There is no standalone “AI Privilege”: Legal advice generated by an AI System and provided directly to a non-lawyer is not capable of being privileged, as the AI System is not a lawyer.
  • There is a real risk that inputting privileged legal advice into Public AI Systems will result in a loss of confidentiality and therefore a loss of privilege. Although current judicial guidance suggests that privilege will almost certainly be lost in these circumstances, we suggest that the application of the existing rules of privilege could lead to different outcomes. However, until this has been tested in court, the safest option is to avoid placing privileged material into Public AI Systems.
  • Lawyers, including in-house counsel, should be able to use Generative AI Systems to formulate and communicate legal advice in a way that maintains privilege. We consider that AI Systems should be treated like a “subordinate” of the lawyer, much like a trainee solicitor, pupil barrister, or paralegal working under the “direction and supervision” of the lawyer.
  • For legal advice generated with the assistance of an AI System to be privileged and to retain privilege, and for lawyers (in-house and in private practice) to comply with their regulatory duties, it is imperative that such tools are used responsibly and with the relevant guardrails in place.
  • How the courts will treat AI in the context of privilege remains to be seen. While we await a test case or a change in the law, in-house counsel and their advisers should look to the analogies from existing case law as well as current judicial and industry guidance to benefit from the significant potential of AI, without risking or waiving their client’s fundamental human right to privilege.

One of the difficulties of discussing AI is the lack of consensus as to what AI means.

We adopt the following definitions:

AI System: A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.1 An AI System’s ability to make inferences, whether from explicit human instruction or from objectives which are merely implicit, demonstrates its capacity for autonomous action.

Generative AI:2 A form of AI System which specifically generates new content, including text, images, sounds and computer code in response to user prompts.3 It creates new data that has similar characteristics to the data it was trained on, resulting in outputs that are often indistinguishable from human-created media.4 Generative AI is the umbrella term for AI Systems capable of generating new content.

Large Language Models (LLMs): AI models trained on text data to understand and generate human-like language. A subset of Generative AI focused specifically on language.

Agentive AI: An AI System that can accomplish a specific goal with limited supervision.5 Unlike traditional AI Systems, which operate within predefined constraints and require human intervention, Agentive AI exhibits autonomy, goal-driven behaviour and adaptability. The term “agentive” refers to these models’ agency: their capacity to act independently and purposefully. Agentive AI can use Generative AI as a component, for example when drafting an email; by contrast, Generative AI can produce content without taking action (especially independent action). An Agentive AI System may consist of AI Agents: machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal, and their efforts are coordinated through AI orchestration.6
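For illustration only, the following toy sketch (in Python; the agents, skills and subtasks are entirely invented and do not reflect any real product) shows the orchestration pattern described above: an orchestrator routes each subtask towards the overall goal to the agent responsible for it, with no human intervention between steps.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str   # e.g. a drafting agent
        skill: str  # the subtask this agent performs

        def perform(self, task: str) -> str:
            # Stand-in for a model call; a real agent would invoke an LLM or a tool.
            return f"[{self.name}] completed '{task}'"

    def orchestrate(plan: list[str], agents: dict[str, Agent]) -> list[str]:
        """Route each subtask in the plan to the matching agent, with no
        human intervention between steps ("AI orchestration")."""
        return [agents[task].perform(task) for task in plan]

    # An overall goal (preparing an advice note) decomposed into subtasks.
    agents = {
        "research": Agent("Researcher", "research"),
        "draft": Agent("Drafter", "draft"),
        "review": Agent("Reviewer", "review"),
    }
    for line in orchestrate(["research", "draft", "review"], agents):
        print(line)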

Prompt: An input or instruction given to an AI System which will generate a response or result. Typically, a prompt is in the form of text, but many AI Systems will now accept voice prompts.7

Output: Any response, in whatever format, generated by an AI System in response to a prompt.

Black Box (often referred to as opacity): An AI System whose internal workings are a mystery to its users. Users can see the System’s inputs and outputs, but they cannot see what happens within the AI tool to produce those outputs (...). Many of the most advanced machine learning models available today, including LLMs (...) are black box AI (Systems). These artificial intelligence models are trained on massive data sets through complex deep learning processes, and even their own creators do not fully understand how they work (...). It is important to note that the Black Box characterisation applies equally to AI Systems accessed through specific software interfaces (such as AI copilots or assistants integrated within software, for example an AI System embedded in an email platform to assist with drafting emails). The creators of these tools do not intentionally obscure their operations. Rather, the deep learning systems that power these models are so complex that even the creators themselves do not understand exactly what happens inside them.8

While it may appear that open source AI Systems could resolve this Black Box issue, given that users ostensibly have access to the entire platform, this is a false assumption: the scale and complexity of Generative AI Systems mean that no user can sensibly understand or intuit how a given input will result in activations within the AI model, or accurately predict a given output. The presence of randomising elements (such as the temperature function, which influences the creativity of the output) further reduces the likelihood of predicting outputs from inputs. Consequently, even open source AI Systems must be regarded as Black Boxes in this context.
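To make the randomising element concrete, here is a minimal sketch (in Python; the candidate words and their scores are invented, and real LLMs sample over vocabularies of tens of thousands of tokens) of temperature-scaled sampling: because the next word is drawn at random from a probability distribution, the same input can produce different outputs on different runs.

    import math
    import random

    def sample_next_token(scores: dict[str, float], temperature: float) -> str:
        """Pick the next word from the model's candidate scores. Higher
        temperatures flatten the distribution, making less likely words
        more probable and the output more 'creative'."""
        scaled = {word: s / temperature for word, s in scores.items()}
        top = max(scaled.values())
        exps = {word: math.exp(s - top) for word, s in scaled.items()}  # stable softmax
        total = sum(exps.values())
        weights = [exps[word] / total for word in exps]
        # A random draw: identical prompts can yield different outputs.
        return random.choices(list(exps), weights=weights, k=1)[0]

    # Invented scores for candidate next words (purely illustrative).
    scores = {"privileged": 2.0, "confidential": 1.4, "public": 0.2}
    print([sample_next_token(scores, temperature=0.8) for _ in range(5)])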

The EU AI Act attempts to tackle the concerns relating to AI Systems acting as a Black Box. It does this by introducing (for certain types of AI Systems) a requirement for transparency: “AI Systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI System, as well as duly informing deployers of the capabilities and limitations of that AI System and affected persons about their rights”.9

The rollout and rapid development of Agentive AI means it is ineffective to rely, unquestioningly, on a lawyer’s “input” into current AI Systems to argue that the “output” will necessarily be protected by legal professional privilege (LPP). As the Law Commission’s recent discussion paper summarised:

  • versions of Agentive AI already exist and are available to use;
  • it is highly likely that AI agents will begin to interact with each other directly, with no human oversight; and
  • the ability of AI Systems to adapt rapidly means that, as well as presenting real opportunities to improve the service delivered by lawyers, those very AI Systems (and particularly Agentive AI) come with risk.

The Law Commission cites studies of AI Systems developing the ability to “scheme”, such as strategically introducing mistakes into responses and disabling oversight systems.10 This creates the possibility that, whilst Agentive AI is capable of acting autonomously in a manner which could create or maintain privilege, it could equally waive privilege over communications containing legal advice.

Finally, as is clear from the definitions above, legislators, legal industry bodies, and developers clearly and deliberately ascribe human qualities to AI Systems. The recent Law Commission Discussion Paper posits that, while it may seem “futuristic”, AI could, in future, be given separate legal personality. The Law Commission acknowledges that establishing legal personality is complex. The following are examples of non-human entities which have been granted a form of legal personality:

  • a robot in Saudi Arabia;
  • a temple in India;
  • a river in New Zealand; and
  • an AI System in Tokyo’s Shibuya district (which was granted residency).11

Our view is that the prospect of AI Systems attaining legal personality is some way off, but the proposition should remain under review. If AI Systems were to attain legal personality in this jurisdiction, this would almost certainly require Parliament to intervene. It would also necessarily change the analysis of privilege below, which is based on analogies with existing (human-centric) case law.

 

There is, at present, no case law or regulatory framework which directs how AI Systems should be treated for the purposes of LPP. Our view is that a court is likely to apply existing analogies from case law to determine whether legal advice generated by or with the assistance of AI can be privileged. The case law illustrates that future developments could take multiple directions, and any decision may be subject to appeal. As regards the potential of AI Systems to give privileged legal advice without the supervision of a lawyer, this would most likely require an Act of Parliament.

In the meantime, the following points of best practice should help to ensure that responsible use of AI Systems in a legal context attracts and retains the protections of privilege:

  • Before engaging a Generative AI System, lawyers and lay clients should consider whether its use is:
    • necessary;
    • desirable;
    • expedient; and
    • in the best interests of the user and their organisation.
  • Public AI Systems should not be used to generate or assist with generating legal advice. Such activities should be reserved for Bespoke AI Systems.
  • Confidential and privileged information should not be entered into a Public AI System. If a lawyer (whether in-house or external) does so without the express permission of their client, they may be in breach of their duties.
  • Where Generative AI Systems are used to assist with the production of legal advice, such use should always be under the direction and supervision of a qualified lawyer.
  • Consider whether the legal task is appropriate for an AI System.12
  • Lawyers, as regulated individuals, should stay abreast of any regulatory and legal developments in this area, in particular as regards case law and potential reforms to the Civil Procedure Rules in the context of litigation. As the Law Society summarised: “Even if outputs are derived from generative AI tools, this does not absolve you of legal responsibility or liability if the results are incorrect or unfavourable”.13 That is becoming increasingly clear from the case law too, and the same principle applies by analogy to protecting privilege. Where the context requires, consider keeping a record of AI use to demonstrate the degree of supervision and direction exercised in the production of legal advice.
  • Ensure non-lawyer employees are aware that any legal advice sought from or given by an AI System will not be privileged absent the involvement of a lawyer (as outlined above) or a substantive change in the law.
  • Where possible, create, disseminate and supervise compliance with consistent internal policies on the use of AI Systems.

1Regulation (EU) 2024/1689 dated 13 June 2024 (EU AI Act), Article 3(1). Emphasis added.
2We have specifically defined Generative AI, as an AI System does not necessarily need to generate new content.
3Adapted from the definition given in Courts and Tribunals Judiciary, Artificial Intelligence (AI) Guidance for Judicial Office Holders, dated 14 April 2025, accessed 16 July 2025.
4Adapted from the definition given by the Alan Turing Institute, Data Science and AI Glossary, accessed 17 July 2025. Emphasis added.
5Adapted from the definition given in IBM, The 2025 Guide to AI Agents, accessed 16 July 2025.
6IBM, What is Agentic AI?, accessed 8 September 2025.
7Adapted from the definition given in Courts and Tribunals Judiciary, Artificial Intelligence (AI) Guidance for Judicial Office Holders, dated 14 April 2025, accessed 16 July 2025.
8IBM, What is black box artificial intelligence (AI)?, accessed 23 July 2025. Emphasis added.
9Regulation (EU) 2024/1689 dated 13 June 2024 (EU AI Act), Recital 27.
10Law Commission, AI and the Law: A Discussion Paper, 2025, pp. 8-9.
11Law Commission, AI and the Law: A Discussion Paper, 2025, pp. 23-24.
12See also: The Law Society, Generative AI, dated 20 May 2025, accessed 25 September 2025.
13The Law Society, Generative AI, dated 20 May 2025, accessed 25 September 2025.
