10 July 2025

Judicial guidance on AI: A timely prompt from the English and Australian courts

Recent judgments both in England and Wales and in Australia have highlighted the dangers of over-reliance on AI.

In light of these judgments, the Australian courts have started publishing detailed guidance on the use of AI. We can expect this to follow shortly in England and Wales, as policymakers reflect on some striking recent cases.

In all jurisdictions, the need for transparency on AI use is key, with the courts emphasising the importance of checking the output of generative AI for accuracy. Whilst lawyers and in-house counsel will be taking note, the courts' guidance has broader relevance to anyone seeking to rely on AI in a professional environment.

 

Guidance from England and Wales

The High Court in London recently heard two cases in which lawyers had put false information before the court, having (potentially or actually) relied on cases, citations and quotations generated by AI (Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank)1.

The cases were heard together under the court's inherent jurisdiction to regulate its own procedures and enforce lawyers' duties to the court (R (Hamid) v SSHD)2.

The judgment of Dame Victoria Sharp sets out practical guidance on the use of AI for all members of the legal profession. The case serves as a stark reminder of the potential consequences of getting it wrong.

Ayinde

In Ayinde, the claimant's barrister prepared written grounds for judicial review citing five cases that did not exist.

When the defendant's solicitors raised concerns that they could not find the cases, the claimant's barrister prepared a response dismissing them as mere cosmetic errors. The claimant's solicitor (who reviewed and approved the response) did not check the authorities, and had not appreciated that they were fake.

Conversely, the defendant's solicitors realised the cases were fake, and obtained a wasted costs order against the claimant.

The barrister failed to provide a coherent explanation of what had actually happened, and was referred to the Divisional Court for further scrutiny. She ultimately admitted acting negligently and unreasonably, but denied using AI or intending to mislead the court.

Dame Victoria Sharp disagreed with the barrister, finding that she had either cited fake cases deliberately, or had used generative AI to produce her list of cases (or to draft her work).

The court decided not to initiate contempt proceedings (even though the legal test had been met). Dame Victoria Sharp noted that the barrister was very junior, and had been operating beyond her level of competence. Questions about the potential failings of her supervisors (and indeed how the fake cases came to be) could not be easily determined in summary contempt proceedings.

Instead, the court decided that a public admonition would suffice, having already referred the barrister to the Bar Standards Board for investigation. Meanwhile, her instructing solicitor was referred to the Solicitors Regulation Authority for failing to take adequate steps, having been notified that the cases could not be found.

Al-Haroun

In Al-Haroun, the claimant prepared a witness statement citing eighteen cases that did not exist, and several others that did exist but did not support the propositions for which they were cited.

The claimant's solicitor then incorporated the same citations into his own witness statement without verifying his client's legal research.

The court described the solicitor's failure to check the accuracy of his witness statement as "lamentable", and stressed that a lawyer is not entitled to rely on their lay client for the accuracy of citations of authority or quotations contained in documents put before the court by the lawyer.

However, the court accepted that the solicitor had no idea that the citations were fake, and therefore had no intention to mislead the court. On that basis, the threshold for contempt had not been met.

The solicitor referred himself to the Solicitors Regulation Authority, and the court also decided to make its own referral.

 

Judicial guidance on AI

The courts have a variety of powers to ensure lawyers comply with their duties when using AI, including public admonition, imposing a costs order, striking out a case, referring lawyers to their regulators, initiating contempt proceedings, and referring lawyers to the police.

Dame Victoria Sharp sets out clear warnings on the use of AI:

  1. Large language models (such as ChatGPT) are not capable of conducting reliable legal research.
  2. AI tools can produce apparently coherent and plausible responses that turn out to be entirely incorrect.
  3. AI responses may make confident assertions that are simply untrue, and may cite sources that do not exist.
  4. AI responses may also purport to quote passages from a genuine source that do not appear in that source.

Concerned about the impact of these risks on the administration of justice, Dame Victoria Sharp set out the court's expectations for legal professionals:

  1. Those who use AI to conduct legal research have a professional duty to check its accuracy by reference to authoritative sources before using it in the course of their professional work.
  2. Practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with responsibility for regulating the provision of legal services.
  3. Those measures must ensure that every individual currently providing legal services understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence.

The judiciary has also recently refreshed its internal guidance on using AI – the revised Guidance for Judicial Office Holders sets out some suggestions for minimising AI-related risks:

  1. Understand AI and its applications – ensure you have a basic understanding of AI tools' capabilities and potential limitations.
  2. Uphold confidentiality and privacy – do not enter private and confidential information into AI tools, as this may amount to publishing or disseminating the information.
  3. Ensure accountability and accuracy – information provided by AI must be checked for accuracy before it is relied on.
  4. Be aware of bias – AI tools inevitably reflect biases in the datasets they are trained on.
  5. Maintain security – use work devices (rather than personal devices) to access AI tools, and use your work email address.
  6. Take responsibility – be mindful that you are personally responsible for material that is produced in your name (eg when preparing witness statements).
  7. Be aware that others may be using AI tools – AI tools are now being used to produce fake material, and AI chatbots are being used by unrepresented litigants for advice – it may be appropriate to enquire about this.

In a speech on 2 July 2025, the most senior judge in England and Wales (Lady Chief Justice Sue Carr) described her horror at lawyers citing fake cases. While acknowledging that AI will be as beneficial as it is inevitable for lawyers and judges, she also called for careful oversight by legal services regulators and for more training and support for lawyers, "particularly trainees and those in the early years of their careers…to enable them to use AI circumspectly and usefully".

 

Forthcoming changes

A Civil Justice Council working group is set to examine whether further rules are needed to govern the use of AI in court proceedings. The working group will publish a consultation paper followed by a final report. Until then, the decisions in Ayinde and Al-Haroun set out the courts' expectations.

 

Guidance from Australia

In Australia, courts are increasingly engaging with the challenges and opportunities presented by generative AI in legal practice. In response to the growing prevalence of AI tools, courts across multiple jurisdictions have begun issuing formal guidance to ensure ethical and responsible use of these tools by lawyers.  

Last year, the Federal Circuit and Family Court of Australia heard the case of Dayal3, where a lawyer put false information before the court, having relied on cases and citations generated by AI. The judgment of Justice Humphreys sets out practical guidance on the use of AI for legal practitioners, highlighting the importance of technology to efficient modern legal practice while also noting the duty of practitioners in litigation not to mislead the court or any other participant in the litigation process.

In Youssef4, the Queensland Supreme Court dealt with the disclosure of AI use by the plaintiff, who told the court that his submissions had been prepared with the assistance of the AI tool ChatGPT. The court noted that the plaintiff vouched for the accuracy of his submissions, stating that the platform had assisted with their organisational structure and added a flourish. The court did not appear to take any issue with the use of ChatGPT for this purpose, particularly since the plaintiff had disclosed how the tool was used.

 

Practice notes and guidance 

In Australia, the courts and law societies have begun publishing practice directions on the use of generative AI in court proceedings. The Western Australian courts are yet to issue a formal practice note on generative AI, having issued a public consultation note seeking input from the profession in February 2025. While the courts across Australia have adopted varying approaches, the Law Council is of the strong view that some form of guidance on the use of AI should be issued by the courts.

In New South Wales, the Supreme Court has taken a prohibitive stance on the use of AI in the legal profession: its practice note prohibits solicitors from using AI to draft or prepare the content of an expert report without prior leave of the Court. Conversely, the Supreme Court of Victoria has issued guidance advising solicitors to exercise caution when preparing affidavit materials, witness statements or expert reports, emphasising that these documents should be finalised in a manner that reflects the witness's or expert's own knowledge and words.

A central theme emerging from both court and law council guidance is the requirement for transparency and scrutiny. Practitioners are expected to disclose any use of generative AI in the preparation of materials filed with the court, and to carefully scrutinise and verify the accuracy of work produced by AI tools. Further, when selecting which AI tools to use, practitioners should ensure that the confidentiality and privacy of information is maintained, given that some AI tools retain the questions and information provided to them.

These themes feed into, and are in some respects an extension of, the professional obligations of lawyers under the Legal Profession Uniform Law (Uniform Law), the Legal Profession Uniform Law Australian Solicitors' Conduct Rules 2015 (ASCR) and the Legal Profession Uniform Conduct (Barristers) Rules 2015 (BR), of which the following are most relevant:

  1. Maintaining client confidentiality (ASCR r 9.1; BR r 114).
  2. Providing independent advice (ASCR r 4.1.4; BR rr 3(b), 4(e) and 42).
  3. Being honest and delivering legal services competently and diligently (ASCR rr 4.1.2 and 4.1.3; BR rr 4(c)–(d), 8(a) and 35).
  4. Charging costs that are fair, reasonable and proportionate (Uniform Law ss 172–173; ASCR r 12.2).

 

Recommendations

AI can be an invaluable tool when used appropriately for tasks it can be trained to perform. Its ability to carry out more nuanced work will only continue to improve, but the risks of relying on AI for technical and professional applications must be properly understood. Any business that uses or interacts with AI should therefore be aware of its limitations, and always check that AI-generated content is accurate before relying upon it professionally. Never insert confidential or commercially sensitive information into any AI tool without appropriate safeguards in place, and beware of relying on content or communications that have the hallmarks of being AI-generated.

 


1 [2025] EWHC 1383 (Admin)
2 [2012] EWHC 3070 (Admin)
3 Dayal [2024] FedCFamC2F 1166
4 Youssef v Eckersley & Anor [2024] QSC 33