AI and outsourcing agreements
AI has made a significant impact globally, highlighted by the groundbreaking launch of ChatGPT, which captured widespread public attention. AI-based solutions have become prevalent across various sectors, revolutionizing industries such as finance, transportation and healthcare.
With an abundance of data now at their fingertips, companies are capitalizing on AI to automate processes, reduce repetitive tasks, facilitate decision-making, and gain a competitive edge in the market. The advantages of implementing AI technologies are evident, driving process optimization and cost reduction.
However, developing and implementing AI solutions is no easy feat. It demands specialized expertise, substantial infrastructure investments, and continuous updates and maintenance. Consequently, many companies are turning to outsourcing as a strategic solution for AI development and implementation.
Outsourcing offers a multitude of opportunities, enabling businesses to tap into the technical skills of expert suppliers, ensuring top-notch maintenance, support, and upgrades throughout the agreement. It also enables tailoring AI solutions to specific business needs while keeping a watchful eye on cost control. Nevertheless, as the realm of AI expands, so do the legal considerations. When drafting AI outsourcing agreements, companies must address crucial legal risks, particularly when internal functions and processes are outsourced.
Data management in outsourcing agreements
One of these concerns revolves around the supplier's use of data for the development and enhancement of the AI solution. AI technologies, particularly machine learning, rely heavily on data to generate valuable insights. Therefore, it's crucial that the AI outsourcing agreement clearly defines the supplier's access to and use of the client's data, including the permitted purposes of solution development and improvement. Where confidential information is involved, the client should establish boundaries and require measures such as segregated environments to maintain confidentiality.
Moreover, the supplier must commit to adopting robust security measures to guarantee the integrity, security, and confidentiality of the customer's data and information, especially when using cloud-based AI solutions. If personal data is processed as part of the AI outsourcing agreement, the parties should align their processing activities with Regulation (EU) 2016/679 (the General Data Protection Regulation, GDPR). For instance, when transferring data outside the EU/EEA, the parties must verify the existence of appropriate safeguards, such as an adequacy decision by the European Commission or the execution of Standard Contractual Clauses (SCCs), before any transfers occur.
Establishing AI liability framework
The issue of liability takes center stage in AI outsourcing negotiations. When discussing AI-based technologies, liability discussions become multifaceted. As with any contractual negotiation, the liability regime underscores the conflicting interests of the parties involved. The supplier will seek to limit liability, particularly when it arises from events beyond its control, such as the use of customer data and information.
However, liability takes on a new dimension in the realm of AI. It is widely recognized that AI-generated outputs, even in the case of weak AI (ie algorithms lacking self-determination and the ability to comprehend the information they process), can have significant real-world consequences.
This raises questions of accountability for damages. In the absence of specific legislation, experts have proposed various hypotheses regarding liability for AI-generated harm, assigning it variously to the manufacturer, the developers, or the owner/user of the AI solution. Clearly allocating liability in the outsourcing contract is therefore vital to provide legal certainty between the parties.
Another critical topic in AI outsourcing agreements revolves around incident management, especially when the solution is based on cloud infrastructure. System failures or data loss, whether accidental or intentional (eg cyberattacks), can have far-reaching effects, impacting not only the affected systems but also the services provided to the end customers.
It’s important to note that data breaches also fall under the scope of the GDPR, which sets out specific communication obligations for data controllers vis-à-vis the relevant authorities and data subjects. If system failures or data loss affect an operator of essential services or a company within the national cybersecurity framework, the communication obligations set out in personal data protection regulations may apply alongside the additional obligations under Legislative Decree No. 65/2018, which implemented Directive (EU) 2016/1148 (the NIS Directive), and Decree-Law No. 105/2019, enacted into law as Law No. 133/2019, which introduced the national cybersecurity framework.
Close partnership between the provider and the user of the AI solution is essential for ensuring compliance with such regulations. Therefore, the AI outsourcing agreement should include clear obligations of cooperation and assistance in the event of incidents that disrupt the AI solution or lead to data loss, while still allowing the customer to seek recourse against the supplier if the incident is the result of inadequate security measures.
Compliance with sector regulations
Compliance with industry sector regulations is crucial and should not be overlooked. European and national authorities have taken significant steps in recent years to regulate outsourcing in critical sectors such as banking and insurance. The European Banking Authority (EBA) and the European Insurance and Occupational Pensions Authority (EIOPA) have adopted guidelines on outsourcing that require specific topics to be explicitly addressed in outsourcing agreements. One such topic concerns the sub-outsourcing of critical and important functions by banking and insurance companies, which is permitted only under strict conditions, along with the information and audit rights of supervisory authorities.
Regulation (EU) 2022/2554 on Digital Operational Resilience for the Financial Sector (DORA) has recently come into effect. This new regulation mandates that contracts between financial entities, including insurance and reinsurance companies, and IT service providers, regardless of whether they qualify as outsourcing, must include certain provisions. For example, suppliers should commit to post-termination obligations to improve the parties' ability to manage IT risks in the financial sector. Furthermore, financial entities are now required to conduct periodic testing, including threat-led penetration testing, to assess incident management preparedness, identify vulnerabilities and deficiencies in digital operational resilience, and promptly implement necessary corrective measures. Fulfilling these regulatory obligations requires the introduction of ad hoc provisions within outsourcing agreements to ensure effective cooperation with technology service providers.