
4 February 2026
How can in-house lawyers use Generative AI responsibly and effectively?
In previous articles, we examined whether privilege can be lost by inputting information into a Public AI System (defined below), and whether in-house counsel can use Generative AI systems in a way that creates or maintains privilege. In this final article, we identify key points of best practice which in-house teams should consider implementing.
What is the general position?
In-house counsel should be able to use Generative AI Systems to formulate and communicate legal advice in a way that maintains privilege. We consider that AI Systems should be treated like a “subordinate” of the lawyer, much like a trainee solicitor, pupil barrister, or paralegal working under the “direction and supervision” of the lawyer.
For legal advice generated with the assistance of an AI System to be privileged and to retain privilege, and for in-house lawyers to comply with their regulatory duties, it is imperative that such tools are used responsibly and with the relevant guardrails in place.
What points of best practice can I implement?
- Before engaging a Generative AI System in any circumstance, consider whether its use is:
- necessary;
- desirable;
- expedient; and
- in the best interests of the user and your organisation.
- Public AI Systems (ie free or public versions of popular AI Systems, where data may be used to train the underlying model for wider use and where users are typically unable to negotiate the terms of service or licence) should not be used to generate or assist with generating legal advice. Such activities should be reserved for Bespoke AI Systems, ie AI Systems offering bespoke contractual protections, which may include terms that limit or prohibit the AI System’s ability to use inputs to improve or otherwise train its model. See section 5 of our full report.
- Confidential and privileged information should not be inputted into a Public AI System. If a lawyer (whether in-house or external) does so without the express permission of their client, they may be in breach of their duties. See section 5 of our full report.
- Where Generative AI Systems are used to assist with the production of legal advice, such use should always be under the direction and supervision of a qualified lawyer.
- Consider whether the legal task is appropriate for an AI System.
- Lawyers, as regulated individuals, should stay abreast of any regulatory and legal developments in this area, in particular as regards case law and potential reforms to the Civil Procedure Rules in the context of litigation. As the Law Society summarised: “Even if outputs are derived from generative AI tools, this does not absolve you of legal responsibility or liability if the results are incorrect or unfavourable”.1 Recent case law also confirms this position.2
- Where the context requires, consider keeping a record of AI use, including of prompts, to demonstrate the degree of supervision and direction exercised in the production of legal advice.
- Ensure non-lawyer employees in your organisation are aware that any legal advice sought from or generated by an AI System will not be privileged absent the involvement of a lawyer (as outlined above) or a substantive change in the law.
- Where possible, create, disseminate and oversee consistent internal policies on the use of AI Systems.
1 The Law Society, Generative AI, 20 May 2025.
2 Ayinde v Haringey [2025] EWHC 1383 (Admin).