
2 August 2024 • 23 minute read
Innovation Law Insights
Artificial Intelligence
AI Pact’s Draft Commitments Published Anticipating AI Compliance
The AI Act has been published, and in anticipation of its full applicability, the AI Office has launched the AI Pact, encouraging organisations to proactively adopt key provisions of the AI Act.
This initiative aims to ensure responsible AI usage and mitigate risks to health, safety, and fundamental rights.
What is the AI Pact?
The AI Pact is a voluntary commitment for organisations to start aligning with the AI Act’s regulations before they become mandatory. By participating in the AI Pact, organisations can lead by example, demonstrating their dedication to ethical AI practices and preparing for the upcoming regulatory landscape. The Pact outlines several core and additional commitments that organisations can adopt based on their role in the AI ecosystem.
The AI Office has now published the draft commitments (thanks to Elinor Wahal for sharing them). Below is an analysis of their contents.
Core commitments for participating organisations
Organisations joining the AI Pact agree to implement three primary commitments:
- Adopt an AI Governance Strategy
- Map High-Risk AI Systems
- Promote AI Literacy
Additional commitments for a broader impact
While the core commitments lay the foundation, organisations are also encouraged to strive for additional goals based on their specific roles in the AI value chain. These commitments vary for AI providers and AI deployers:
For AI Providers:
- Risk Identification
- Data Quality Policies
- Traceability
- User Information
- Human Oversight
- Risk Mitigation
- Transparency in AI Interaction
- Content Marking
For AI Deployers:
- Risk Mapping
- Human Oversight in Deployment
- Content Labelling
- User Notification
- Workplace Transparency
The path forward: Transparency and accountability
Organisations participating in the AI Pact are setting a standard for transparency and accountability in AI usage. By publicly sharing their commitments and reporting on their progress, these organisations not only demonstrate their dedication to ethical AI practices but also build trust with consumers, stakeholders, and regulators.
The AI Pact offers a unique opportunity for organisations to lead in the responsible adoption of AI. By anticipating and implementing the key provisions of the AI Act, participating organisations can mitigate risks, foster innovation, and ensure that AI’s benefits are realised ethically and sustainably.
This is a crucial milestone for companies that want to adopt AI systems while minimising the risk of potential challenges and building trust with their customers and employees, so as to get the best out of AI.
Author: Giulio Coraggio
The insurability of AI risk: A broker's perspective
There is increasing discussion about AI risk and how companies can obtain insurance coverage to protect themselves in this area. To investigate this issue, we spoke with a leading insurance broker: Alessandra Corsi and Rossella Bollini from Marsh provided their perspective on the nuances of AI risk coverage and the evolving role of insurance in mitigating AI-related liabilities.
1. From the broker's perspective, how is the insurance market responding to the coverage of AI risk?
The insurance market, especially since the GenAI explosion, has started to monitor the rise of new risks related to the development and use of artificial intelligence solutions, both to anticipate the demands of insureds and to start managing exposure across existing portfolios efficiently. At the time of writing, the insurance market is still in an “observation” stage whereby, other than in a very few cases, specific ad hoc AI solutions are not yet available. Based on Marsh's global perspective, in the US and in selected European countries there is more attention to the topic: insureds do perceive the challenges that AI solutions bring with them, wondering how to transfer their residual AI risk to the insurance market and pushing insurers to deliver answers and propose solutions. So far, the Italian market – from both a supply and a demand angle – has not developed any meaningful initiatives; we expect this to change in the near future, with carriers looking to find value-added solutions for their clients.
2. In the described market context, how is AI risk exposure transferred? Is it possible to rely on traditional products?
Currently, there is only one ad hoc insurance product for AI risk, distributed by a leading player in the reinsurance market. Beyond this, clients looking for coverage can explore other established product lines such as Cyber, Professional Indemnity, Crime, Intellectual Property and Product Liability, where claims and/or circumstances related to AI are typically not yet specifically excluded. Indeed, cover seems to be afforded on a “silent” basis: not affirmatively covered and not explicitly excluded. To give a few examples, if training data and input data can be captured by the model and leaked in the model outputs, causing a data breach, the cyber policy could cover it; similarly, if a fraud is carried out using deepfakes, the crime policy could cover it. Aiming to curb the level of uncertainty, AI affirmative endorsements on cyber and crime policies are very slowly being released but, at this point in time, this is not yet the norm.
3. What risks do you think are potentially insurable with an AI policy?
Insurability is a complex topic, as it depends on the exposure, on the business conducted and on the insured's risk appetite. Depending on the situation, one could decide to cover first party damages – insuring the performance of self-built AI – or potential third-party liability profiles, either contractual or non-contractual. Depending on the business conducted by the insured, it might be relevant to cover risks from hallucination and false information, privacy infringement, intellectual property violations or unfair or biased output.
It goes without saying that a certain degree of tailoring is required to shape a product that fits the insured's needs.
4. Are traditional underwriting methods still relevant and applicable in the AI world?
They do remain relevant, but only in part. Let's draw a comparison with the cyber risk underwriting process. Although cyber is a complex and nuanced risk, the insurance market has settled on the use of questionnaires, sometimes combined with perimeter scanning or risk dialogues: as of now, it's a linear path. For AI risk, it may not be as straightforward. In order to quantify the risk, it will be necessary to identify the underwriting information on a case-by-case basis (deployer, user, type of AI involved), to be evaluated together with data on model training and post-deployment controls. Quantifying damage in the event of a claim is also very complex: consider the case of an AI product provided to banks to distinguish legitimate transactions from fraud. In this case, the provider would want to buy a policy to cover situations where the product underperforms. To address the difficulty of quantifying the loss, it may be necessary to set a threshold, eg a guarantee that the model will catch at least 99% of all fraudulent transactions; if the AI fails to deliver as promised, the insurance company will pay.
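To make the threshold mechanism concrete, here is a minimal sketch in Python of how such a parametric-style trigger could be evaluated. It is purely illustrative: the guaranteed rate reflects the 99% example above, but the policy limit, the figures and the payout formula are hypothetical assumptions, not an actual policy design.

```python
# Illustrative sketch of a parametric-style trigger for an AI performance guarantee.
# All figures, names and the payout formula are hypothetical.

GUARANTEED_DETECTION_RATE = 0.99   # eg "the model will catch at least 99% of fraudulent transactions"
POLICY_LIMIT_EUR = 5_000_000       # hypothetical maximum payout


def assess_claim(fraudulent_total: int, fraudulent_caught: int, loss_from_missed_eur: float) -> float:
    """Return the indemnity due (EUR) if the AI underperforms the guaranteed rate."""
    if fraudulent_total == 0:
        return 0.0
    achieved_rate = fraudulent_caught / fraudulent_total
    if achieved_rate >= GUARANTEED_DETECTION_RATE:
        return 0.0  # guarantee met: no payout
    # Guarantee breached: indemnify the loss caused by the missed frauds, up to the limit
    return min(loss_from_missed_eur, POLICY_LIMIT_EUR)


# Example: 10,000 fraudulent transactions, 9,850 caught (98.5% < 99%), EUR1.2m of losses
print(assess_claim(10_000, 9_850, 1_200_000))  # -> 1200000.0
```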
5. Have you experienced the notification of any claims under AI policies or related to damages caused by AI? If yes, which type of claims?
At Marsh, most of the claims we've seen involving the use of GenAI are in the domain of fraud. Typically, these involve fraudulent transfers of funds obtained by leading employees to believe they are complying with legitimate requests from internal parties within the company. As of now, claims that fall in this category are generally notified under crime policies. GenAI is also used to refine phishing attacks (currently one of the main vectors of ransomware), making them more credible and increasing their success rate.
6. What are your predictions for the near future?
The path will likely be the same as the one experienced for cyber risk: eventually, insurers will need to quantify and monitor AI exposure within traditional insurance policies to the extent that it could represent a significant unexpected risk to their portfolios. To do so, the reinsurance markets and Lloyd's of London might start imposing AI exclusions on cyber, professional indemnity, crime and other traditional products, creating a gap that will need to be filled. By that time, we expect AI-specific insurance products to be ready to perform, supported by a defined and replicable underwriting process and a consistently predictable loss quantification mechanism.
Authors: Giacomo Lusardi, Karin Tayel
Data Protection and Cybersecurity
Encryption: National Cybersecurity Agency updates guidelines
On 22 July 2024 the National Cybersecurity Agency (ACN) updated its guidelines on cryptographic functions, which focus on data confidentiality and quantum threat preparedness techniques.
As the ACN specifies, cryptography allows communications in the digital world to be protected securely and efficiently, and the updated documents make it easier to navigate the specifications of cryptographic algorithms and to better understand how ciphers work and interact with the messages to be sent. The guidelines already published in December 2023 included guidance on storing passwords, guidance on cryptographic hash functions – critical to cybersecurity, as their properties make it possible to detect whether a piece of data or a message has been altered – and guidance on message authentication codes, which ensure the integrity of a message and allow the identity of the sender to be verified.
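By way of illustration only (this snippet is not part of the ACN guidelines), the following minimal Python sketch shows how a cryptographic hash detects alteration of a message and how a message authentication code (HMAC) additionally ties that check to a shared secret key; the key and message are hypothetical.

```python
import hashlib
import hmac

message = b"Contract text agreed between the parties"

# Hash: anyone can recompute the digest and detect alteration of the message
digest = hashlib.sha256(message).hexdigest()
tampered = hashlib.sha256(b"Contract text agreed between the partieS").hexdigest()
print(digest == tampered)  # False: even a one-character change yields a different digest

# MAC: only holders of the shared secret key can produce a valid tag, so a correct
# tag also provides assurance about the identity of the sender
shared_key = b"secret key exchanged out of band"  # hypothetical shared secret
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True: integrity and key possession verified
```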
The guidelines recall how cryptography arose in ancient times to protect the confidentiality of data, especially in the military sphere, ensuring that secret communications were not intercepted by enemies. Originally based on alphabetic ciphers, it evolved into binary encryption in the 20th century, becoming essential with the spread of digital systems. Today, cryptography is crucial for protecting sensitive data on the web, in online financial transactions, and in supporting technologies such as blockchain and digital identities.
Cryptography is a field that, evolving from its practical beginnings, has been enriched with theoretical foundations. It consists of encrypting messages with cryptographic keys to obtain an encrypted message, in such a way that the operation can be reversed (decryption) only by those holding the appropriate key. Algorithms can be deterministic or probabilistic. The move from symmetric encryption, with a shared key, to asymmetric encryption, with public and private keys, solved the problem of secure key distribution. The security of a cipher is evaluated by its resistance to attacks, such as brute force attempts. With the advent of quantum computers, new challenges and solutions have emerged, such as post-quantum cryptography and quantum cryptography based on the principles of quantum physics.
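As a purely illustrative example (using the third-party Python cryptography package, which is our choice and not a tool referenced by the ACN), the sketch below contrasts the two approaches mentioned above: symmetric encryption, where sender and recipient share the same key, and asymmetric encryption, where anyone can encrypt with the public key but only the private-key holder can decrypt.

```python
# pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

plaintext = b"confidential wire instructions"  # hypothetical message

# Symmetric encryption: the same shared key both encrypts and decrypts
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)
assert cipher.decrypt(cipher.encrypt(plaintext)) == plaintext

# Asymmetric encryption: encrypt with the public key, decrypt with the private key,
# removing the need to distribute a shared secret in advance
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(plaintext, oaep)
assert private_key.decrypt(ciphertext, oaep) == plaintext
```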
The guidelines also cover cryptanalysis, ie the process of analyzing a cryptosystem to find weaknesses in its structure that would allow attacks more efficient than brute force to be designed, undermining its security. Cryptanalysis of a cipher isn't only used for malicious purposes; it's also the basis of the validation process of a cryptographic system.
The ACN document serves as an introduction to the “Cryptographic Functions Guidelines” series produced by the Technological Scrutiny and Cryptography Division of the ACN's Certification and Oversight Service. The aim is to raise awareness of cybersecurity aspects among producers and providers of digital services, incentivizing the use of secure and modern cryptographic solutions capable of countering risks such as identity theft, financial theft or, more generally, the compromise of data security.
Author: Alessandra Faranda
Intellectual Property
Patenting AI: An important decision of the UK Court of Appeal in the Emotional Perception case
The intersections between AI and intellectual property rights have long been the focus of numerous debates, and the rapid development of technology has often presented interpreters with scenarios not expressly contemplated by lawmakers.
This is the case, for example, regarding the possibility of patenting AI systems.
On the one hand, the patentability of computer programs and mathematical methods as such is generally excluded (see, eg, Article 52 of the European Patent Convention), and most jurisdictions allow them to be patented only where they involve a technical contribution that is new and inventive over the state of the art. On the other hand, admitting the patentability of AI systems could to some extent encourage investment in innovation and the sharing of knowledge.
On 19 July 2024, the UK Court of Appeal issued an important decision on the subject (Comptroller-General of Patents v Emotional Perception AI Ltd). The dispute concerned a patent application claiming a system based on an artificial neural network (ANN) providing media file recommendations to users. For instance, in the context of music, the technology would make it possible to offer users tracks classified according to the emotions they generate, regardless of the musical genre they belong to.
The application was first rejected by the UK Intellectual Property Office (UKIPO), but the decision was later overturned by the High Court. Hence the appeal brought by the UKIPO.
To assess the applicability of the rules excluding patentability, the Court of Appeal first questioned the notion of a program for a computer. According to the judges, this is a “set of instructions for a computer to do something," a computer being any “machine that processes information.”
That being clarified, the second question to be answered was: can an AI system such as the one claimed be treated in the same way as a program for a computer?
The answer was affirmative: an artificial neural network, including the weights on which it is based, is still a set of instructions for a computer to do something.
In this respect, disregarding the patentee's arguments, the court held that no relevance could be attributed to the peculiarities of an ANN system compared to more traditional computer programs. These include the lesser role of the human being in defining the instructions, the solution by the ANN of problems that would be difficult for a human programmer to solve, the fact that it's not a program based on “if-then” logic, and the fact that the machine learns by itself by processing a certain amount of information.
The reasoning then focused on whether there was the technical effect necessary for the program to qualify as a patentable invention. Considering that the system essentially presented the user with improved file recommendations, the court found that the requirement was not met, and the High Court's decision was overturned.
Albeit very briefly, the court finally pointed out that, even if an artificial neural network system did not qualify as a computer program, it could be qualified as a mathematical method. The assessment, therefore, would be similar.
In conclusion, what can be learned from the decision is that the patentability of an AI system based on an artificial neural network is not excluded per se, but it rather depends on whether the claimed invention involves a technical contribution. Failing that, the rules excluding the patentability of computer programs and mathematical methods as such apply.
The decision, among the first to rule on the patentability of inventions involving AI systems, is consistent with the approach taken so far by the European Patent Office. At this stage, it's not known whether the dispute will continue before the Supreme Court. What is certain is that the ruling will represent an important precedent in the jurisprudential landscape, even beyond the borders of the UK.
Author: Massimiliano Tiberio
Elvis Act: Generative AI in copyright and advertising law
On 21 March 2024, Tennessee enacted the Ensuring Likeness, Voice, and Image Security Act (Elvis Act), a pioneering law that entered into force on 1 July 2024. The legislation aims to protect songwriters, artists, and music industry professionals from the potential dangers of unauthorized use of their voices and images by generative AI.
The Elvis Act (named after music icon Elvis Presley) represents a significant achievement in copyright protection, but its most innovative aspect lies in its extension to advertising law, including the unlawful use of artists' voices. With the increasing use of generative AI in the US and worldwide to create realistic voices and images, individual artists' reputations, images, and commercial value are often threatened. This law aims to counter these threats by regulating the use of generative AI and ensuring that artists maintain control over their own identities.
This measure aims to protect artists from potential issues, particularly economic ones, related to the computerized plagiarism of their voice or image, which is now within reach of any common application or program. With the enactment of the Elvis Act, artists will have specific legislation to seek compensation for damages resulting from the unauthorized use of their identity, including vocal clones and realistic images generated without consent.
This new law is an important step towards adapting laws to the era of new technologies, which today not only serve as creative tools but also facilitate the dissemination of so-called digital replicas. The Elvis Act comes at a time when there are numerous legal disputes against AI developers for improperly using copyrighted content.
A significant example is the case of actress Scarlett Johansson, who, in 2023, filed a lawsuit against an app that created an advertisement using her image and voice without authorisation. Similarly, the heirs of comedian George Carlin sued the creators of the podcast Dudesy for using Carlin's voice in a YouTube video, violating copyright and publicity laws. Another relevant case is Young v NeoCortext Inc, where participants of the reality show Big Brother started a class action lawsuit against a software developer for the unauthorised use of their images.
The explosion of these disputes and other dangers have necessitated the creation of specific regulations to protect the copyright and publicity rights of artists endangered in the era of generative AI. The Elvis Act, a significant response to this need, represents one of the most recent developments in this area.
With AI rapidly evolving, every jurisdiction must continue adapting to protect artists and their rights in an increasingly digital world. This is precisely what is happening lately: in Europe, the much-anticipated AI Act has also been published in the Official Journal, marking a significant step forward in this direction.
Author: Rebecca Rossi
Gaming and Gambling
New Italian Remote Gambling rules notified to the European Commission
The technical rules for the new Italian remote gambling licenses have been notified to the European Commission, which is one of the last milestones before the launch of the tender for new licenses in Italy.
What is the notification to the European Commission?
EU law provides for the notification to the European Commission of draft technical rules concerning products and information society services, so that EU Member States and other interested parties can raise comments during a three-month “standstill” period.
In this context, Italy notified the European Commission of the draft technical rules, which “contain technical specifications that define the performance and functions as well as the technical requirements that the concessionaire must ensure for the operation and remote collection of public games.” These are the technical rules setting out the regime applicable to the new remote gambling licenses to be issued by the end of 2024.
What is provided by the new rules on Italian remote gambling licenses?
The regime applicable to remote gambling licenses will not change entirely. Remote gambling platforms will continue to exchange real-time messages with the servers of the technical provider of the Italian gambling authority according to the communications protocols, which consist of predetermined messages.
The notified rules set the regime for technical compliance verification, specify the minimum technical requirements, and cover the operator’s IT structure and the necessary characteristics of the information system. This system includes several components:
- the gaming system for providing services
- the system for presenting the gaming offer (website and app)
- the gaming accounts system
- the accounting system for determining due amounts
- the monitoring and control system for the hardware and software infrastructure
- the telematic connection network for information transport
It’s mandated that the resources needed for the operator’s system infrastructure must reside within the European Economic Area, even if cloud computing solutions are used.
Specific provisions ensure maximum capacity, availability, scalability, performance, security, and data confidentiality guarantees. Measures are also in place for the Italian gambling authority’s supervisory and control actions. Particular emphasis is placed on mechanisms to prevent gambling addiction, such as self-limitations, self-exclusions, and blocking features.
The rules also address the regime applicable to operators who provide services for other concessionaires. In such cases, the gaming systems must be physically or logically separated according to the game type, ensuring that the data for each concessionaire can be isolated.
What is going to happen next?
The standstill period will end on 18 October 2024, so the tender for new Italian gambling licenses cannot be launched before that date. Once this milestone is passed, unless changes need to be implemented to the rules, ADM, the Italian gaming authority, will launch a tender for new licenses whose main terms should be the following:
- All the Italian remote gambling licenses will expire by the year’s end and cannot be renewed. Operators currently in the market can apply for a new license, which is likely to be merged with their existing license to ensure continuity of operations. There is no maximum limit on the number of new licenses, but each group cannot hold more than five licenses, and there will be a limited timeframe to apply for the license.
- The price will be EUR7 million for each nine-year license, to which an annual license fee equal to 3% of the GGR net of gambling taxes will be added. Also, operators will have to invest an amount equal to 0.2% of their GGR net of gambling taxes in responsible gaming campaigns (an illustrative calculation follows this list).
- Very stringent requirements are provided to meet the suitability criteria, and potential penalties are also increased as a deterrent for operators.
- Substantial limitations are introduced on skins / white-label sites, since each license can be linked to only one website with an Italian top-level domain name, and the operator’s logo must be shown on the homepage of the site. No provisions have been introduced to allow additional skins against the payment of a fee.
- Shops selling top-up cards to transfer funds to gaming accounts (PVRs) will be recorded in a specific registry and will pay an annual fee of EUR100 per shop. Also, any gambling activity at, or withdrawal of funds from, the shops could be limited.
- More stringent rules have been introduced against the offering of games in Italy through unlicensed gambling websites, including through the implementation of payment-blocking measures.
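By way of illustration only, and using an entirely hypothetical GGR figure, the short calculation below shows how the one-off license price, the 3% annual fee and the 0.2% responsible-gaming investment mentioned above would combine for a single operator:

```python
# Hypothetical illustration of the new Italian remote gambling license economics.
# The GGR figure is invented; the percentages are those described above.

license_price_eur = 7_000_000             # one-off price for a nine-year license
annual_ggr_net_of_taxes_eur = 50_000_000  # hypothetical GGR net of gambling taxes

annual_license_fee_eur = 0.03 * annual_ggr_net_of_taxes_eur          # 3% of net GGR
responsible_gaming_spend_eur = 0.002 * annual_ggr_net_of_taxes_eur   # 0.2% of net GGR

print(annual_license_fee_eur)        # 1500000.0
print(responsible_gaming_spend_eur)  # 100000.0
```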
Given the high price of the new licenses, we expect considerable M&A activity, with larger operators acquiring smaller operators and merging their player databases.
Author: Giulio Coraggio
Italian court upholds sanction for violating Gambling Advertising Ban on affiliate agreements
In a recent landmark ruling, the Italian Regional Administrative Court (TAR) of the Lazio Region upheld a significant sanction imposed by the Italian Communications Authority (AgCom) concerning the ban on gambling advertising in Italy. This ruling (Decision No. 13241/2024) carries profound implications for future gambling deals in Italy.
Background of the case
The case revolves around Vincitù S.r.l., an online gambling license holder, which was fined EUR388,453.93 by AgCom (Resolution No. 121/24/CONS) for violating the Italian Gambling Advertising Ban. The ban, codified in Article 9 of the Italian Law Decree No. 87/2018 (known as the Dignity Decree), prohibits all forms of advertising, sponsorship, and promotional communications related to gambling with cash winnings.
AgCom’s investigation, prompted by a police inspection, unearthed multiple promotional and affiliation agreements at Vincitù's headquarters. Specifically, AgCom identified 30 promotional contracts termed “agreements for the promotion of remote public games on behalf of the concessionaire Vincitù S.r.l. ” and 20 affiliation contracts.
Key findings and implications
Among the promotional contracts, two were identified as constituting advertising communications, particularly those with Top Ads, which promoted Vincitù's gambling sites via content creator Spike Slot on social media. The remaining 28 contracts didn't produce evidence of illegal promotional content. However, within the affiliation contracts, it was found that many, except five with intermediary companies, violated the Italian Gambling Advertising Ban. These violations included tracking links used to identify new players from affiliate websites, which facilitated the calculation of commissions based on player deposits and bets.
The TAR’s decision to uphold AgCom’s sanction confirmed several critical points:
- Vincitù's Violation: The court validated AgCom’s findings that Vincitù's promotional activities breached the Italian Gambling Advertising Ban.
- Broad Scope of the Ban: The ruling reinforced the extensive reach of the Italian Gambling Advertising Ban, emphasising its application to all forms of promotional content.
- Platform Responsibilities: The decision highlighted the obligation of major platforms to implement robust organisational systems, including automated tools and AI, to ensure compliance with the ban.
Business transactions and compliance
Gambling operators' promotional and affiliation agreements involving gambling-related content will need careful structuring to comply with the Italian Gambling Advertising Ban. The ruling underscores the necessity for Italian gambling operators to:
- Review contracts: Ensure all promotional and affiliate agreements adhere to the legal framework established by the Dignity Decree.
- Implement monitoring systems: Develop and deploy advanced monitoring and compliance systems to prevent any inadvertent violations.
- Content regulation: Carefully curate content that provides information about gambling products and services to ensure it doesn't contravene the advertising ban.
Conclusion
The TAR's decision to uphold AgCom's sanction against Vincitù marks a significant precedent in the enforcement of the Italian Gambling Advertising Ban. The ruling serves as a crucial reminder of the importance of stringent compliance with local advertising laws, also from a formal standpoint, considering that agreements that produce effects on an Italian audience have to comply with the limitations set forth in the Italian gambling advertising guidelines.
Author: Vincenzo Giuffrè
Innovation Law Insights is compiled by the professionals at the law firm DLA Piper under the coordination of Arianna Angilletta, Matteo Antonelli, Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Alessandra Faranda, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Tommaso Ricci, Miriam Romeo, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Giulia Zappaterra.
Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna and Matilde Losa.
For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.
If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.