3 October 2023 | 15 minute read

IA Meets AI – Rise of the Machines

Written by: Lucia Bizikova, Philip Hancock, Dan Jewell, Ilan Sherr

 

This article was originally published in Daily Jus on 2 October 2023 and is reproduced with permission from the publisher.

 

Introduction

Two US lawyers and their law firm were recently fined USD5,000 for relying on fictitious cases. Those cases had been created by the generative artificial intelligence (AI) tool ChatGPT. The court found that, while there was nothing “inherently improper” about using AI as a research tool, the lawyers and their firm “abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations”.

It is increasingly acknowledged that AI will bring about fundamental changes in all areas of life, including dispute resolution. AI can review and analyse large amounts of data and automate repetitive and laborious tasks, capabilities that law firms can use effectively. It therefore has the potential to significantly decrease the lawyer time spent on such work, reducing costs for clients and freeing lawyers to focus on areas requiring greater legal and strategic expertise.

The impact for lawyers and clients alike could be profound. In this article we discuss what that impact might look like in practice, with a particular focus on international arbitration.

 

Beneficial Uses of AI in Arbitration Proceedings

AI-powered tools are already, and increasingly, used at virtually all stages of international arbitration proceedings, and their transformative potential is evident. Existing applications include:

  • E-discovery and document review

Document review can be one of the most expensive aspects of dispute resolution, and the lion’s share of documents reviewed are invariably irrelevant. E-discovery and document review platforms, such as RelativityOne and Rtrieveal, use AI tools to address those issues. By using “continuous active learning” technology, intuitive AI, and large language models (LLMs), these platforms can significantly streamline the review and disclosure process, and, crucially, improve the accuracy of review. This is achieved by the AI relying on samples of documents reviewed by lawyers and self-training to detect the key issues, whilst constantly re-ranking unreviewed documents based on further lawyer input.
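
The review-and-re-rank loop described above can be illustrated with a toy sketch. This is not any vendor’s actual implementation: a trivial keyword-overlap score stands in for the real classifier, and all document texts and labels are invented for illustration.

```python
# Toy sketch of one "continuous active learning" (CAL) round:
# train on lawyer-reviewed samples, then re-rank the unreviewed pool
# so the likeliest-relevant documents surface first for human review.

def score(doc, relevant_terms):
    """Score a document by word overlap with lawyer-confirmed relevant documents."""
    return len(set(doc.lower().split()) & relevant_terms)

def cal_round(reviewed, unreviewed):
    """'Train' on reviewed (doc, is_relevant) pairs, then re-rank the unreviewed pool."""
    relevant_terms = set()
    for doc, is_relevant in reviewed:
        if is_relevant:
            relevant_terms |= set(doc.lower().split())
    return sorted(unreviewed,
                  key=lambda d: score(d, relevant_terms),
                  reverse=True)

reviewed = [
    ("price fixing agreement between competitors", True),
    ("office lunch menu for friday", False),
]
unreviewed = [
    "minutes of the marketing team meeting",
    "draft agreement on fixing resale price levels",
    "invoice for printer cartridges",
]
ranking = cal_round(reviewed, unreviewed)
print(ranking[0])  # the cartel-like document is surfaced first
```

In a real platform the scoring model is far more sophisticated, and each new batch of lawyer decisions feeds back into the next round of ranking; the point of the sketch is only the feedback loop itself.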

Recently developed neural-net deep learning AI models now also enable the pre-training of AI prior to commencement of a review exercise. This is particularly helpful when reviewing for issues that arise repeatedly within large datasets, but which would otherwise take significant time for legal teams to find. For example, by using DLA Piper’s Aiscension AI modules that are pre-trained to spot cartels, review teams are often able to set aside over 99% of reviewable documents from each set prior to second-level human review. The cost savings which result from that application of AI can be significant.

  • Legal research

AI-powered platforms, such as Westlaw Edge, Co-Counsel and Lexis+ AI, are intended to analyse legal text and quickly provide relevant guidance, precedents, cases, and statutes. In addition, existing generative AI tools embedded in these platforms can find and check case citations and the judicial treatment they have received.

As more advanced search engines and text generation tools are developed, lawyers might be able to automate further aspects of legal research, including answering more complex legal questions and drafting legal notes. 

However, lawyers have a responsibility to ensure that any such AI-generated content has been fact-checked and is based on accurate and reliable sources – the role of the lawyer is (at least at this stage) supplemented, but not replaced, by AI.

  • Language processing and drafting

Generative AI is already being used to produce essay-like text, and it is likely to be only a matter of time before it becomes widely used in legal drafting. It is increasingly being used to convert transcripts or notes of witness interviews into draft witness statements. Editing automatically drafted documents is much quicker than generating them from scratch, which could help reduce the costs of the evidence-gathering stage of proceedings.

In June 2023, Jus Mundi launched a Beta version of “Jus-AI”, a GPT-powered tool specifically tailored for arbitration and State-State disputes. It draws from Jus Mundi’s global legal database of over 80,000 documents in international law and arbitration, and is aimed at providing concise summaries from arbitral awards and court judgments, enabling lawyers to focus on the application of that information rather than spending time sourcing and summarising it.

  • Transcription and translation services

Transcription of hearings is becoming increasingly automated and reliant on voice-recognition and speech-conversion technologies. It is one of the drivers of the resurgence of the Legal Tech support industry, which includes companies like Opus 2 and TrialView. Similarly, translations from foreign languages are regularly carried out by machines, with human translators being responsible for reviewing auto-translated text. Similar tools are already in place and used frequently in applications like Microsoft Teams, which can generate transcripts of virtual meetings. 

This is a useful tool in the context of international arbitration, where meetings are often virtual and involve participants from all over the world. It has particular value in the process of preparing witness statements, where lawyers can check certain points that arose during witness interviews. However, care must be taken in relation to legal privilege: some tribunals (and courts) in common law jurisdictions may not regard such transcripts as lawyer work product, in which case they may need to be disclosed.

  • Selecting arbitrators

Arbitrator Intelligence is a tool that provides insight into international arbitrators. It analyses arbitrators’ track record on key aspects of their past cases, such as rulings on document production, the duration of the proceedings, the arbitrator’s questions during hearings, and their reasoning in the final award. It then generates reports based on the collated data, allowing parties (in return for a fee) to make more informed decisions on arbitrator nominations/appointments to suit their case. Of course, given that most commercial arbitrations are confidential, the dataset used by Arbitrator Intelligence is not comprehensive. This serves to illustrate both that AI is often only as effective as the data to which it has access, and also the importance of understanding what the underlying dataset is.

Similarly, Jus Mundi’s Conflict Checker analyses a broad database of arbitrators, counsel, experts and tribunal secretaries, and identifies existing and past relationships among them. Its main objective is to identify and prevent potential conflicts of interest from arising.
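
The core of a relationship-based conflict check of this kind can be sketched in a few lines. This is a hypothetical illustration, not Jus Mundi’s actual method; all names and case data are invented.

```python
# Hypothetical sketch of a conflict check: from past cases and their
# participants, flag prior relationships between a proposed arbitrator
# and counsel already acting on the matter.

from collections import defaultdict

def build_relationships(past_cases):
    """Map each person to the set of people they have appeared in a case with."""
    seen_with = defaultdict(set)
    for participants in past_cases:
        for person in participants:
            seen_with[person] |= set(participants) - {person}
    return seen_with

def check_conflicts(arbitrator, counsel_team, seen_with):
    """Return counsel on the team who have a past relationship with the arbitrator."""
    return sorted(set(counsel_team) & seen_with[arbitrator])

past_cases = [
    {"Arbitrator A", "Counsel X", "Expert E"},
    {"Arbitrator A", "Counsel Y"},
    {"Arbitrator B", "Counsel Z"},
]
rels = build_relationships(past_cases)
print(check_conflicts("Arbitrator A", ["Counsel X", "Counsel Z"], rels))
# → ['Counsel X']
```

Real tools draw on far richer data (law firm affiliations, appointments, publications), but the underlying idea is the same: build a relationship graph from historical records and query it before an appointment is made.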

  • Predictive analytics

Predictive analytics is the use of AI to consider precedents and decisions to assess the likely outcome of a case. This may be especially useful in the context of arbitrator appointments in circumstances where software can analyse how each potential nominee is likely to determine the present case based on prior decisions. 

Platforms like LexisNexis’ Lex Machina and Context are also used to predict the behaviour of judges, opposing counsel and other parties, enabling parties to anticipate potential challenges in cases before they arise. That said, because commercial arbitrations are generally confidential, the available data is more limited, which in turn will likely limit the utility of these types of tools in an arbitration context. Nevertheless, we still expect to see their use rise in the coming years, particularly if there are more initiatives to publish a greater number of commercial arbitration awards and related materials (with or without redaction/anonymisation), and perhaps in the context of investment treaty arbitration where there is already more transparency in the arbitral process and outcomes.

 

SVAMC Guidelines on the use of artificial intelligence in arbitration

On 31 August 2023, the Silicon Valley Arbitration and Mediation Center (SVAMC) published draft Guidelines on the use of artificial intelligence in arbitration (AI Guidelines) and invited members of the global arbitration community to participate in a consultation process and provide comments. As a consequence, the draft may undergo structural or substantive changes in the near future.

According to SVAMC, the AI Guidelines are intended to assist participants in arbitration to navigate the potential applications of AI.

In their current state, the AI Guidelines are divided into three chapters, which are respectively addressed to: (i) all participants in arbitration (Chapter 1); (ii) parties and party representatives (Chapter 2); and (iii) arbitrators (Chapter 3). They contain proposed guidelines on: (1) the uses, limitations and risks of AI applications; (2) safeguarding confidentiality; (3) disclosure; (4) duty of competence and diligence in the use of AI; (5) respect for integrity of the proceedings and evidence; (6) non-delegation of decision-making responsibilities; and (7) respect for due process. They also contain suggested practical examples of compliant and non-compliant uses of AI in arbitrations which are intended to serve as a “yardstick” to measure conformity in real-world scenarios. Finally, they propose a model clause for inclusion in procedural orders for parties who agree to opt in.

The AI Guidelines are further evidence that AI has a growing part to play in all manner of arbitration, and a useful reference point for parties, party representatives and arbitrators alike as to how to safely leverage the capabilities of AI in arbitration. 

 

AI arbitrators?

As illustrated above, AI is already an active part of modern arbitration. However, the concept of arbitrators using AI to deliver their decisions (in contrast to using AI tools to introduce efficiencies in their own work), or the outright appointment of “machine” arbitrators powered by AI is one which meets considerable resistance. As AI technology develops, that resistance may diminish, but will it ever be overcome? 

Given AI’s ability to analyse large amounts of data, it has been suggested that AI-powered decision-making is likely to be more accurate than human decision-making, and less subject to bias. However, AI lacks emotional intelligence, and may be oblivious to nuance, which could hinder its ability to deliver ‘fair’ or ‘merits based’ outcomes. In this sense, there might be more of a risk of strictly ‘correct’ legal outcomes, but which are not necessarily the ‘right’ outcome for the parties. Of course, this issue is itself the subject of some debate, e.g., to what extent arbitrators should favour a black letter law approach to determining legal issues.

In addition, there is a risk that decisions issued by AI-arbitrators might be unenforceable in certain jurisdictions. Even though the 1958 New York Convention and the 1965 ICSID Convention do not expressly state that arbitrators must be natural persons, some national arbitration laws and rules do require tribunal members to possess certain qualities that can only be attributed to human beings, such as legal training and experience (although query whether an AI tool which has absorbed every case and piece of legislation in a legal system is “trained” or “experienced”).

Also, in the context of domestic arbitration, some national arbitration laws include an express requirement that arbitrators must be natural persons. This is the case, for example, under Article 1450 of the revised French Code of Civil Procedure according to which “the mission of arbitrator [in domestic arbitration] may only be entrusted to a natural person”. This, and similar provisions in other arbitration laws, may give rise to some rather technical arguments as to whether an AI can be an “arbitrator” capable of rendering an enforceable award.

Further, the enforceability of decisions rendered by AI-arbitrators might be challenged on public policy grounds, as some jurisdictions might not be comfortable with the idea of delegating decision-making authority to machines (which may also be contrary to the parties’ agreement). Questions also remain over the extent to which AI is able to provide adequately reasoned decisions. It has been suggested that AI programs do not have the ability to provide “reasons” per se for their decisions (as opposed to merely citing the relevant input upon which the decision is based), and that they are equally unable to shift between different methods of processing information that they have been programmed with (such as arithmetic reasoning, decision tree reasoning, analogical reasoning) and select the ‘appropriate’ approach for any given case. Finally, it is uncertain whether an AI-arbitrator would be able to ensure due process, and how AI would be able to respond to aggressive or abusive conduct during proceedings. To address this risk, SVAMC’s AI Guidelines propose that “[a]n arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator’s decision-making function.”

And yet, automated decision-making is not without precedent. For example, the online shopping platform eBay has been operating an automated resolution centre for several years, purportedly settling up to 90% of all claims without any human input. In addition, DoNotPay.com, an AI-powered legal assistant originally created to challenge traffic fines (self-styled as “the World’s First Robot Lawyer”) has been used on various consumer rights issues, including lowering medical bills, cancelling subscriptions, disputing credit reports, and closing bank accounts. Some experimental platforms like The Virtual Courthouse, Net-ARB, eCourt and ThoughtRiver are also attempting to incorporate AI in decision-making. Indeed, Sir Geoffrey Vos, Master of the Rolls of England and Wales, has let it be known that he considers that the future of English dispute resolution will involve an element of AI. 

Nonetheless, disputes resolved in international arbitration can be complex and high-value, not least due to their commercial nature and international character. Given the procedural difficulties and risk of facing challenges to AI-decided awards, it is likely that AI arbitrators will remain, at least for the time being, in the realm of fiction. However, parties should ‘watch this space’, because as AI systems become more developed and robust, and their use becomes more commonplace, this may change.

 

Challenges and Risks

Despite the benefits outlined above, AI adoption in international arbitration poses some significant risks. 

Proper data privacy and security protections will be required to avoid data theft and exposure of confidential information, not least because of the potential transfer of data to third-party providers. AI systems will need to be regularly checked to ensure that they are not saving confidential or privileged information in a way that could be reverse engineered and expose clients and their commercially sensitive information. Given that confidentiality is often a major factor in choosing arbitration, legal teams will need to ensure that effective safeguards are in place so that all data made available to AI tools is protected. 

In addition, whilst properly functioning AI boasts increased accuracy, improperly functioning AI is prone to making errors. AI is only as “intelligent” as its programming. Text generative AI tools have repeatedly been found to invent false data that sounds correct rather than actually being correct. Better guardrails on these types of AI to ensure factual accuracy will be critical if they are to be relied on for legal proceedings (see, for example, the recent practice direction on the use of AI in the preparation of materials filed with the court issued by the Court of King’s Bench of Manitoba in Canada).

AI is also not free from bias. It is dependent on the quality and objective reliability of the input data and has been found to further discrimination in some instances. AI tools used for selecting arbitrators are a risky area in this regard and will require human oversight and review. 

The use of AI currently remains largely unregulated, and it will be necessary for governments to implement safeguards and measures that ensure that it can be used reliably and fairly. In the meantime, lawyers will need to be aware of the risks, take a cautious approach to using unknown or new AI tools, and make sure that their clients’ data is kept confidential and protected against unauthorised use. SVAMC’s AI Guidelines, for instance, propose that:

All participants involved in arbitration proceedings who use AI tools in preparation for or during an arbitration are responsible for familiarising themselves with the AI tool’s intended uses and should adapt their use accordingly. All participants using AI tools in connection with an arbitration should make reasonable efforts to understand each AI tool’s relevant limitations, biases, and risks and, to the extent possible, mitigate them.

In light of the swift advancements and integration of AI within legal services, it will be crucial for lawyers to adapt and evolve in order to maintain their competitive edge in the constantly shifting market. Lawyers may soon need to become “legal technologists”. 

 

Conclusion

While it is (thankfully!) safe to say that recent reports that AI will replace all lawyers immediately are greatly exaggerated, it is likely to revolutionise legal (and other) services across most sectors, driving costs down, and increasing efficiency. 

For this to happen, lawyers and other legal professionals must embrace the change by learning new skills, developing methods and tools to implement AI in their daily work effectively, understanding how it operates, and implementing robust safeguards and control systems. The latter in particular is important to ensure that client confidentiality is maintained, and sensitive information is not used improperly. Moreover, when using generative AI in legal submissions and drafting, it is imperative that all information is fact-checked and based on accurate and reliable sources.

The structure of this article was prepared with the assistance of AI, but the content was then crafted and developed by lawyers.

Should you wish to discuss ways in which DLA Piper uses AI in our work or would like any further information, please get in touch. To access DLA Piper’s comprehensive report on AI entitled “AI Governance: Balancing policy, compliance and commercial value”, click here.
