
5 January 2026
Should I input privileged advice into a public AI tool and can I maintain privilege when doing so?
No – inputting privileged advice into a public AI tool risks the court treating that advice as having been published to the world, and doing so may breach any applicable regulatory duties. In practice, the loss of confidentiality may be theoretical rather than actual, but we advise exercising caution.
What is the legal position?
For a communication to be protected by privilege, it must be (and remain) confidential.1 It follows that information which is “public property and public knowledge” cannot be considered confidential.2 Whether a communication is confidential depends on the interplay between who is party to the communication and how the information shared within it is used.
How does that apply to AI Systems?
The answer depends, in part, on the AI System used, as well as the terms of the agreement between the user and the provider of the AI System:
- Public AI Systems: Free versions of popular AI Systems such as ChatGPT, where data may be used to train the underlying model for wider use, and which may process and store inputs depending on their standard terms of service. Users are typically unable to negotiate the terms of service or licence.
- Bespoke AI Systems: AI Systems which offer bespoke contractual protection around data privacy, confidentiality, and security (among other things). Such protection can be tailored depending on the arrangement between the provider and the user and may include terms which limit or prohibit the AI System’s ability to use inputs to improve or otherwise train its model.3
The terms of a Public AI System may promise or even guarantee its users privacy, and it may therefore be tempting to input otherwise confidential and/or privileged information into such a system. However, while the courts are clear that confidentiality is not a binary concept,4 confidentiality and privacy should not be used interchangeably, and a promise of “privacy” does not guarantee confidentiality: "it is mistaken to describe the reasonable expectation of privacy as being a touchstone of confidentiality".5 In other words, confidential information may be private, but privacy in and of itself does not make an input or output confidential. The extent to which information inputted into a “Generative AI System” (ie a form of AI System which specifically generates new content, including text, images, sounds and computer code, in response to user prompts,1 creating new data that has similar characteristics to the data it was trained on and resulting in outputs that are often indistinguishable from human-created media)2 loses its confidential nature (and therefore the potential protection of privilege) is similarly a matter of fact and degree. It follows that complete disclosure of a privileged communication to a Generative AI System is more likely to result in a loss or waiver of privilege than a mere reference to it.
The April 2025 AI Guidance for the English judiciary contains a section on upholding confidentiality and privacy when using Generative AI Systems in the legal context. Notably, the guidance states that "any information that you input into a public AI chatbot should be seen as being published to all the world”, thereby constituting a loss of confidentiality.6 Although this guidance is not law, our view is that a judge in this jurisdiction (themselves subject to the guidance in their use of AI) is likely to start from the position that inputs into a Public AI System have been “published to all the world” and are unlikely to be confidential.
Are there any exceptions?
Case law7 has previously held that the fact that information has become public does not mean that it has actually been accessed by the public at large. Confidential information inputted into a Public AI System may not lose its confidential character (and therefore its ability to benefit from the protection of legal professional privilege (LPP)), as the inputs may not be accessible to other users of the system, or may be accessible only in very limited circumstances. In other words: confidentiality is theoretically but not practically lost. There may be a public policy argument to support the position that the mere fact of inputting confidential information into a Public AI System does not make it “generally accessible” and that it has therefore not lost its confidential character.8
However, absent a test case on the issue, it is conceivable that an “intense search” using detailed and accurate prompts could result in certain otherwise confidential information becoming public. As AI systems continue to evolve in sophistication and accessibility, the risk of unintended disclosure of confidential (and/or sensitive) information to the wider world also increases. This risk is further compounded by the possibility that AI systems could be deliberately manipulated or exploited by malicious actors seeking competitive advantage or aiming to extract confidential data for harmful purposes, eg by probing a Generative AI System to surface sensitive business information. Our view is that these developments highlight the need for ongoing vigilance and the implementation of robust policies and safeguards to protect confidential information.
Given the substantial risks associated with inputting information into a Public AI System which cannot guarantee the confidentiality of the information it processes, our view is that the prudent approach is to assume that any information entered into a Public AI System will lose confidentiality, unless and until the Court comments on the extent to which confidentiality is lost. Solicitors – including in-house counsel and foreign lawyers who are registered with the SRA – are under a duty to keep their clients' information confidential.9 Inputting confidential information into a Public AI System without a client's explicit consent may therefore also constitute a breach of a lawyer's regulatory duties.
What are some points of best practice?
- Confidential and privileged information should not be inputted into a Public AI System. If a lawyer (whether in-house or external) does so without the express permission of their client, they may be in breach of their duties.
- Lawyers, as regulated individuals, should stay abreast of any regulatory and legal developments in this area, in particular as regards case law and potential reforms to the Civil Procedure Rules in the context of litigation.
- Where possible, create, disseminate and supervise consistent use policies on AI Systems internally.