10 October 2023 | 5 minute read

Liability for damages caused by AI

The unstoppable development of digital technologies and the exponential spread of AI systems pose delicate issues in the area of civil liability, for two main reasons.

First, the general Italian civil liability rules date back to the 1942 Civil Code; second, the applicable rule may depend on multiple factors: the status of the injured party, the type of product or service in which the AI system is incorporated, and the type of damage caused.

The potential uncertainties in the event of litigation have prompted the EU to intervene with legislative proposals which are still under discussion. Before analyzing the proposals, we will give an overview of the protection and remedies that the current system makes available to persons harmed by an AI system.

 

The current legal framework, between contractual and tortious liability

In the AI environment, damage is often the consequence of the breach of an obligation between the damaging party and the injured party. In such a case, the latter will be entitled to exercise the remedies available under the specific contract in place, which will most often be a contract of sale or services, an employment contract, a contract in the insurance or banking sector, or a contract for financial intermediation. Where there is no contractual relationship, the injured party may invoke tortious liability.

If AI systems can be traced back to products, any defects will fall under the product liability rules set forth in Directive 85/374/EEC, now transposed in Italy into the Consumer Code. However, this protection is mostly addressed to consumers and only allows compensation for death, personal injury, and the destruction or deterioration of goods other than the defective product.

Other legal provisions that may be invoked in Italy in case of damage caused by AI systems are Article 2050 of the Italian Civil Code (performance of dangerous activities), Article 2051 of the Italian Civil Code (damage caused by things in custody), Article 2054 of the Italian Civil Code (circulation of vehicles), and Article 2043 of the Italian Civil Code (general tort liability).

 

The proposed Directive on AI and tortious liability

To avoid the fragmentation resulting from inconsistent legislative measures across Member States and to reduce legal uncertainty for businesses developing or using AI in the internal market, the European Commission announced on 28 September 2022 the adoption of two proposals aimed at adapting tort rules to the digital age, the circular economy, and the impact of global value chains.

One of the two proposals aims to revise the existing product liability rules. The other consists of a new directive on liability for Artificial Intelligence (AI Directive), aimed at facilitating compensation for those who have suffered damage resulting from the use of AI systems.

In the Commission’s view, existing national liability laws, particularly those based on fault, are not suitable for handling liability claims for damage caused by AI-based products and services. The main limitations of these rules could be inherent in the characteristics of AI, including its complexity, autonomy, and opacity (the “black box” effect), which can make it difficult or unduly burdensome for injured parties to identify the responsible parties and to prove that the requirements of tortious liability have been met. In addition, the AI supply chain involves several players, making the attribution of liability even more complex.

The AI Directive is part of a coordinated European approach to addressing the rise of AI and digital technologies: the AI Act, proposed by the Commission in April 2021, focuses primarily on monitoring and preventing damage, while the AI Directive aims at harmonizing the liability regime where AI systems cause damage. In summary, the AI Directive seeks to ease the injured party’s burden of proof through two main tools: a rebuttable presumption and the disclosure of evidence.

The rebuttable presumption is intended to make it easier for injured parties to prove the causal link between the defendant’s fault and the output produced by the AI system, or the failure of the AI system to produce an output, that gave rise to the damage. The AI Directive does not, therefore, go so far as to shift the burden of proof onto the defendant (e.g. suppliers or manufacturers of the AI system), as this is considered too burdensome (and could in fact stifle innovation and the adoption of AI-based products and services).

In addition to the rebuttable presumption, with respect to high-risk AI systems the AI Directive gives national courts the power to order the disclosure of evidence by the provider, or by another person subject to the provider’s obligations, where the provider has refused to comply with the same request made by the injured party-claimant (or by any other person entitled to make it). The claimant may also request that such evidence be preserved.

 

The AI Directive: Next steps

Five years after the AI Directive is transposed in the Member States, the Commission will submit a report to the Parliament, the Council, and the European Economic and Social Committee, assessing whether the Directive has achieved its intended objectives.

In the same context, the Commission will also consider whether it is appropriate to provide for a strict liability regime for damages caused by specific AI systems and for compulsory insurance coverage.
