
6 October 2021 | 11 minute read

Man vs Machine: Legal liability in Artificial Intelligence contracts and the challenges that can arise

Introduction

In our first article in this series, we looked at how the courts internationally have approached some of the contractual and legal issues that have already cropped up around artificial intelligence. With the continued rise of artificial intelligence, which is increasingly being used across a wide range of products and services in all sectors and industries, more and more legal issues are going to arise (and indeed some have already begun to manifest themselves across the legal and AI landscape).

In this article we look at some of the key legal and contractual risk points that businesses using, or supplying, AI need to consider.

We’ll explore this topic in more detail at our webinar with 4 Pump Court on 20 October at 2pm - 3pm. Click here for more information and to register.

When AI goes wrong - contractual remedies flowing from defective AI

Contractual liability at common law is rooted in the principle of remoteness: a loss is recoverable only if it is not too remote, in other words if it was within the reasonable contemplation of the parties when the contract was made.

As with most contracts for the sale of products, any contract for the supply or provision of AI is likely to contain supplier- or developer-favoured allocations of risk. Businesses supplying or providing AI are likely to try to protect themselves from potential liability by including a provision excluding liability for defective AI. The effectiveness of such clauses has not yet been tested, and so the task will eventually fall to the courts to assess whether such an exclusion clause is reasonable[1] (likely involving public policy considerations on the allocation of risk in AI). The absence of case law makes it difficult to predict how a court would strike this balance, and this is a significant area of risk for suppliers looking to rely on an exclusion clause (or indeed purchasers hoping to overcome such a clause to recover loss and damage).

Therefore, another likely avenue for a potential claimant who has suffered loss as a result of defective AI would be to argue that there is an implied term as to the quality and/or fitness for purpose of the AI, and that (in England and Wales) this has not been met under either the Sale of Goods Act 1979 (SGA 1979) or the Consumer Rights Act 2015 (CRA 2015).

However, these arguments are not without their own difficulties. Crucially, claims under both the SGA 1979 and the CRA 2015 would be predicated on the classification of AI as a “good”, which is in itself contentious. For instance, in the context of computer software, the English courts have held that intangible computer software (to be distinguished from computer hardware) does not constitute a “good” for the purposes of the SGA 1979[2]. There is therefore uncertainty over whether the intangible code underpinning an AI process would be similarly categorised.

AI and tortious liability

In cases where there are barriers to relying on contractual liability, a potential claimant will usually look to the law of torts to try to bridge this gap. In particular, negligence is often viewed as an opportunity to impose liability on a party outside the reach of the contract.

The law of negligence is rooted in the foreseeability of the loss, and in proving a chain of causation between the loss and the party being sued. While each case will be determined on its facts, the particular features of AI pose immediate challenges in establishing a claim in negligence. As noted earlier, a claimant may face significant problems in establishing (on the balance of probabilities) foreseeability, or a causal nexus between the conduct of the programmers, developers or suppliers and an outcome caused by aspects of the AI which have evolved through machine learning long after the system was initially developed.

This uncertainty will no doubt operate to the benefit of suppliers and developers in the short term, but there is a significant risk that:

  1. The courts will look to adapt the principles of tort to fit the new paradigms created by AI - tort law has traditionally been the means by which the courts have addressed changes in society, and there is no reason to believe that AI will be an exception to this.

  2. The existing product liability regime will likely come into play. The principles of product liability depend on negligent design, negligent manufacture and breach of a duty to warn, and its scope extends to parties involved in the manufacture, sale and distribution of a product. The first challenge here will be the point noted above – whether AI in fact counts as a product or a service. Even if that challenge were surmounted, we would expect difficulties in determining exactly where in the supply chain the defect occurred if the outcome complained of has stemmed from a feature of autonomous machine learning.

  3. Any perceived lacuna in tortious (or indeed contractual) liability will generate the legal or political will to introduce regulations to forge liability where none may have existed at common law. The European Commission’s Comparative Law Study on Civil Liability for Artificial Intelligence[3] notes that:

“The procedural and substantive hurdles along the way of proving causation coupled with the difficulties of identifying the proper yardstick to assess the human conduct complained of as faulty may make it very hard for victims of an AI system to obtain compensation in tort law as it stands.”

We expect that the measures which would most likely be introduced to modify the existing liability regime would include:

a. Introducing a strict liability regime to cover situations where remoteness or causation might otherwise prove a barrier to recovering against a supplier or developer;

b. An adapted duty of care, e.g. obligations on a supplier of AI systems to monitor and maintain those systems to control for unexpected outcomes due to machine learning;

c. Express allocation of liability to manufacturers for damage caused by defects in their products (even if those defects resulted from changes to the AI product after it had left the manufacturer’s control), or a strict liability regime applying to all manufacturers or producers;

d. Joint and several liability between manufacturers, developers, suppliers and retailers; and

e. Reversing the burden of proof, requiring the manufacturer or supplier to prove that the AI product was not the cause of the harm.

Indeed, it is worth noting that many of the above measures have already been recommended by the European Commission as a means of mitigating the legal challenges facing would-be complainants in relation to AI technology.

The UK has already taken steps to address some of the uncertainties around AI by introducing the Automated and Electric Vehicles Act 2018 (which attributes liability to the insurer where damage has been caused by an automated vehicle driving itself). We can expect the law of torts to continue to be shaped by legislative and regulatory reform, not to mention the more immediate prospect of the courts developing the existing common law principles as they deal with novel cases on a day-to-day basis.

Responsible adult or unthinking machine? – contractual legal personality and AI

One of the key features of artificial intelligence (and one of the most troubling from a legal perspective) is machine learning – the capacity of the product to gather data and use it to develop and to make new decisions which it has not been explicitly programmed to make.

Legal liability requires a party to be responsible for the outcomes of its actions. How does this work where those outcomes were caused by an aspect of machine learning that was not necessarily foreseeable to the AI programmers, developers, providers or purchasers?

There has been a significant legal debate on whether liability in AI matters could be settled by granting AI its own legal personality. While this might seem fanciful at first glance, it is arguably no more novel than the 19th century decision to ascribe legal personhood to companies and corporate entities.

There is certainly a practical appeal in granting legal personality to AI. Firstly, it may fill the conceptual lacuna of what happens where an AI process causes damage by malfunctioning in an abnormal and unforeseen manner. It would allow fault to be allocated to the true source of the damaging act (the AI) instead of imposing it upon actors who, in reality, could not have anticipated the damage.

Equally, it could be argued that if AI systems demonstrate a process of rationality, through being able to make independent decisions, then the AI should be held liable if it falls short of the parties’ reasonable expectations in conducting that process.

Nonetheless, there are limitations to these theoretical justifications. One key limitation is that, while AI may exercise rational processes, it is unclear whether it can be considered to have full legal capacity. Given that AI functions and processes occur within the confines of what its underlying code permits, AI is arguably not “free-thinking” in the sense that it could not, for example, form an intention to create legal relations.

Perhaps most importantly, granting legal responsibility to AI may not be in the interests of the potential claimant; it could limit their options for recourse by taking liability outside of the existing parties. For example, can AI provide adequate compensation to any potential claimant?

Significantly, there appears to be very little political appetite for recognising AI as having legal personality. In three recent resolutions[4], the European Parliament rejected the proposal to grant legal personality to AI, stating that any legal changes should “start with the clarification that AI systems have neither legal personality nor human conscience”[5]. Instead, it has suggested a two-tier liability regime by which: (1) operators of “high risk” AI (AI having significant potential to cause damage) would be strictly liable for damage caused by the AI[6]; and (2) operators of any other AI would be liable on a fault-based assessment[7].

These comments are not fatal to arguments in favour of AI legal personality, however; rather they are additions to the long-standing debate on the ethics of personhood in the context of AI.

AI’s big issue - tackling bias in AI systems

While it is undoubtedly the case that AI offers potentially transformative opportunities, businesses must also take into account exposure to new and possibly unique risks. In particular, every company that takes its social responsibilities seriously will need to be aware of the risks posed by AI in relation to bias and discrimination.

It has long been recognised that AI systems and their processes can reinforce elements of bias present in their underlying datasets. Indeed, a 2016 investigation by ProPublica revealed that a number of US states used an algorithm to assist with making bail decisions that was twice as likely to falsely label black prisoners as being at high risk of re-offending as it was white prisoners. The European Parliament has also recognised that AI “has the potential to create and reinforce bias”[8].

Now more than ever, businesses recognise the need to reflect inclusivity in their branding and to promote social responsibility. In this context, these findings bring into sharp focus the additional factors that need to be taken into account when developing and using AI systems.

As with all risks, there are methods that businesses can employ to mitigate any potential reputational harm. This may involve increasing human input in the AI process or making use of open-source bias mitigation tools. Ultimately, it will be for individual companies to decide whether these methods render AI a commercial risk worth taking.

Conclusion

In our final article, we will look at some of the practical measures which businesses can adopt to safeguard their position in relation to some of these AI issues when entering into contracts or a contractual supply chain.

Mitigation of the risks relating to AI requires early engagement with experienced lawyers who understand the cultural, legal and regulatory landscapes. To discuss the impact of AI on your business and how we can help, please do speak to any of the authors, or your DLA Piper contact.

If you are interested in learning more about AI projects and the consequences for your contracts, register for our seminar at 2pm - 3pm (UK) on 20 October 2021, in collaboration with 4 Pump Court. If you can’t join live, you can register in advance to receive the recording. Click here for more information and to register.


[1] Pursuant to s.11 Unfair Contract Terms Act 1977
[2] See St Albans City and District Council v International Computers Ltd [1996] 4 All ER 481 and Computer Associates UK Ltd v Software Incubator Ltd [2018] EWCA Civ 518
[3] Luxembourg: Publications Office of the European Union, 2021
[4] Resolution 2020/2012(INL) on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and related Technologies; Resolution 2020/2014(INL) on a Civil Liability Regime for Artificial Intelligence; Resolution 2020/2015(INI) on Intellectual Property Rights for the development of Artificial Intelligence Technologies
[5] Civil Liability Resolution, Annex, (6)
[6] Art.4(1)
[7] Art.8
[8] AI Ethical Aspects Resolution, Art.27