
15 December 2025

Innovation Law Insights

Legal Break

Getty Images v Stability AI – Would the EU have decided differently?

In this episode of Legal Break, we analyse one of the most talked-about cases in the AI world: Getty Images v Stability AI. We explore three key questions: Why did the UK court rule the way it did? What does it tell us about training AI models on copyrighted images? And would an EU court have reached a different outcome? Watch the episode here.

 

Artificial Intelligence

AI and preventing insurance fraud: from automatic identification to compliance

AI is perhaps the most advanced tool in the fight against insurance fraud. Machine learning and generative AI can analyse large volumes of heterogeneous data (claims, social media, open data) and identify anomalous patterns that could indicate attempted fraud.

But AI is also being used to trick consumers. One example is the “ghost broker” scam. Websites and chatbots, using generative AI, simulate real agencies and produce fake policies, managing to deceive even the most experienced users.

AI-based anti-fraud systems can cross-reference historical, behavioural and biometric data, detecting anomalies in real time, such as multiple requests from different individuals with identical or similar data, false documents generated by AI or serial claims. AI also helps automate cross-checking between public and private databases, drastically reducing identification times and the risk of human error.

The most advanced applications in the field of anti-fraud include:

  • Computer vision techniques to compare images of claims and validate the authenticity of the damage detected.
  • Behavioural biometrics to detect inconsistencies in user interaction patterns.
  • NLP (Natural Language Processing) to analyse the content of claims and identify linguistic inconsistencies that could potentially indicate fraud.
  • Automatic cross-checking of medical records, testimonies and satellite data to validate the veracity of catastrophic events.
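The cross-checking idea described above can be pictured with a minimal sketch. This is purely illustrative (the field names `claimant`, `iban` and `vehicle_plate` are hypothetical, not any insurer's actual schema): it flags groups of claims that share identical identifying data but were filed under different names, a classic serial-fraud signal.

```python
from collections import defaultdict

def flag_duplicate_claims(claims):
    """Group claims by normalised identifying data and flag any group
    filed under more than one claimant name (a serial-fraud indicator).
    `claims` is a list of dicts with hypothetical keys:
    'claimant', 'iban', 'vehicle_plate'."""
    by_key = defaultdict(list)
    for claim in claims:
        # Normalise the fields used for cross-referencing so that
        # cosmetic differences (spacing, case) don't hide a match.
        key = (claim["iban"].replace(" ", "").upper(),
               claim["vehicle_plate"].upper())
        by_key[key].append(claim)
    flagged = []
    for group in by_key.values():
        claimants = {c["claimant"] for c in group}
        if len(claimants) > 1:  # same IBAN/plate, different people
            flagged.extend(group)
    return flagged

claims = [
    {"claimant": "A. Rossi", "iban": "IT60X054281110", "vehicle_plate": "ab123cd"},
    {"claimant": "B. Bianchi", "iban": "IT60 X054 2811 10", "vehicle_plate": "AB123CD"},
    {"claimant": "C. Verdi", "iban": "IT99Y000000000", "vehicle_plate": "ZZ999ZZ"},
]
print(len(flag_duplicate_claims(claims)))  # the first two claims are flagged
```

Production systems would of course combine many more signals (behavioural, biometric, documentary), but the core pattern of normalising and cross-referencing data is the same.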

The role of the AI Act and the institutional response

The EU has adopted Regulation (EU) 2024/1689, known as the AI Act, to address the challenges posed by the rapid evolution of AI. The AI Act represents the cornerstone of AI regulation and governance in the internal market. The primary objective is to protect safety and fundamental rights through a risk-based approach. But the AI Act also has significant implications for the insurance sector.

Some AI systems, like those for biometric recognition, credit scoring or emotion recognition, are classified as “high risk” and have to comply with strict requirements in terms of risk assessment, transparency, traceability and human oversight. AI systems used to detect financial fraud don’t automatically fall into this category, but they could if they're integrated with other features or systems already considered high risk. This paves the way for closer scrutiny of both the tools used by insurance companies and those misused by fraudsters.

The AI Act also introduces specific rules for systems that generate synthetic content like deepfakes or realistic imitations of people, which are increasingly common in ghost broker scams and identity theft. Providers and deployers of these systems will have to ensure transparency, affix digital watermarks and provide adequate documentation, increasing the traceability of any illegal uses. Member states and competent authorities now also have to monitor high-risk systems and coordinate at European level.
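The transparency obligation can be pictured with a toy example. This is only a sketch of the underlying idea (a machine-readable label bound to the generated content): real deployments use robust watermarking schemes embedded in the media itself, and the key name and metadata fields below are hypothetical.

```python
import hashlib
import hmac

SECRET = b"provider-signing-key"  # hypothetical provider key

def label_output(content: bytes) -> dict:
    """Attach a machine-readable provenance label to generated content."""
    tag = hmac.new(SECRET, content, hashlib.sha256).hexdigest()
    return {"ai_generated": True, "provenance_tag": tag}

def verify_label(content: bytes, metadata: dict) -> bool:
    """Check that the label matches the content, so that stripping the
    label or altering the content is detectable."""
    expected = hmac.new(SECRET, content, hashlib.sha256).hexdigest()
    return (metadata.get("ai_generated") is True
            and hmac.compare_digest(metadata.get("provenance_tag", ""), expected))

img = b"\x89PNG...synthetic image bytes..."
meta = label_output(img)
print(verify_label(img, meta))          # True: label intact
print(verify_label(img + b"x", meta))   # False: content was altered
```

The design point is that the label is cryptographically bound to the content, so traceability survives copying but not tampering.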

These developments are an opportunity for national authorities like IVASS to take a proactive role, not only in supervising the sector, but also in preventing and suppressing AI abuse in the insurance industry. The Insurance Supervisory Authority’s reports pay close attention to attempts at fraud against policyholders.

Fraud attempts especially affect the motor vehicle liability insurance sector: an extremely large market, divided among many operators, many of which operate online. But fraud attempts could affect every line of insurance business, particularly those with a high incidence of serial fraud.

The Insurance Supervisory Authority hasn't yet expressed its opinion on the use of AI in systems designed to identify fraudulent situations. But it is considering using AI for more routine activities, such as managing consumers' complaints.

Using AI skilfully and appropriately could significantly improve insurance products in general and encourage companies to offer services that users actually want. One example is assistance policies, where resources freed up from routine, document-heavy contract administration could be redirected to personal services.

This will be a long process, and one requiring a different approach from the rest of the services sector. But industry regulations already seem to be setting the path by requiring a balanced cost/benefit ratio for insurance product customers.

Author: Giacomo Lusardi

 

Data Protection and Cybersecurity

Italian DPA issues opinion on ANAC whistleblowing guidelines

On 27 November, the Italian Data Protection Authority (Italian DPA) issued a favourable opinion on the new whistleblowing guidelines – specifically regarding internal reporting channels – issued by the Italian Anticorruption Authority (ANAC), and on ANAC’s resolution to amend and supplement the guidelines for external reporting channels.

The measures covered by the opinion introduce important clarifications to properly interpret Legislative Decree 24/2023 (Whistleblowing Decree), especially to ensure confidentiality regarding the identity of whistleblowers and the content of their reports. Here are the key points in the Authority’s opinion:

  • Obligation to Adopt Secure Reporting Channels – Entities subject to the Whistleblowing Decree must implement reporting channels that guarantee the confidentiality of the whistleblower’s identity, the identities of involved parties, and the content of the report. The use of IT platforms is recommended, as these typically allow for robust security measures, including data encryption at rest. Whistleblowers should be encouraged to use only the specifically established channels, although confidentiality must be ensured in any case. Notably, regular or certified email isn't considered sufficient on its own to guarantee whistleblower confidentiality unless accompanied by specific, well-justified countermeasures.
  • Data Protection Impact Assessment – The responsibility for conducting a data protection impact assessment under Article 35 of the GDPR lies with the recipients of the reports, not with the platform providers. However, providers must support this process, particularly by supplying documentation about the technological solution implemented.
  • Handling Irrelevant Reports – Even when a report doesn’t meet the criteria to be considered relevant under the Whistleblowing Decree, the confidentiality of the whistleblower must still be maintained, as whistleblowers have a legitimate expectation of privacy and protection.
  • Retention of Reports – Reports and related documentation must be deleted within five years from the date the whistleblower is informed of the outcome of the procedure, unless retention is necessary for managing proceedings arising from the report (eg disciplinary actions).
  • Training – Individuals responsible for managing reports must receive specific training, including on personal data protection.
  • Intragroup Relationships – In groups where the parent company manages the reporting channel, the parent company must be considered the data processor under Article 28 of the GDPR.
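Two of the requirements above (confidentiality of the whistleblower's identity and the five-year retention limit) lend themselves to a small illustration. This is a hedged sketch of one possible design, not ANAC's prescribed architecture: the report is stored under an unguessable case ID, the whistleblower's identity is kept in a separate, access-restricted store, and the deletion deadline is computed from the date the whistleblower is informed of the outcome.

```python
import secrets
from datetime import date

def register_report(report_text: str, whistleblower_id: str,
                    reports: dict, identities: dict) -> str:
    """Store the report under a random case ID and keep the whistleblower's
    identity in a separate, access-restricted store -- one way to implement
    confidentiality by separation. (Hypothetical structure.)"""
    case_id = secrets.token_hex(8)  # unguessable case reference
    reports[case_id] = report_text
    identities[case_id] = whistleblower_id  # restricted store, separate access controls
    return case_id

def deletion_due(outcome_notified: date) -> date:
    """Reports must be deleted within five years of the date the
    whistleblower is informed of the outcome of the procedure."""
    try:
        return outcome_notified.replace(year=outcome_notified.year + 5)
    except ValueError:  # 29 February with no leap-year counterpart
        return outcome_notified.replace(year=outcome_notified.year + 5, day=28)

reports, identities = {}, {}
cid = register_report("Suspected irregularity in tender X", "mario.rossi",
                      reports, identities)
print(deletion_due(date(2025, 11, 27)))  # 2030-11-27
```

In a real platform the two stores would also be encrypted at rest, in line with the security measures the opinion recommends.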

The Italian DPA’s opinion reinforces the crucial role of personal data protection in applying and interpreting the Whistleblowing Decree, which imposes numerous compliance requirements regarding privacy regulations (eg data protection impact assessment, instructions on data protection, specific training for those managing reports, data processing agreements). These obligations are part of a complex regulatory framework that demands constant attention to personal data protection and the privacy of individuals involved.

Author: Cristina Criscuoli

 

Intellectual Property

Getty Images v Stability AI: What one of the most anticipated judgments says about the relationship between IP and Generative AI

The High Court of Justice of England and Wales has issued its long-awaited judgment in Getty Images v Stability AI, the first – and so far the most significant – UK decision addressing intellectual property (IP) issues related to generative AI.

The ruling provides useful clarification and guidance on secondary copyright infringement, but – given that Getty withdrew certain claims during the proceedings – it doesn’t address the more important questions concerning primary copyright infringement, particularly the use of third-party materials to train AI models.

The background

The proceedings pitted Getty Images, a global leader in stock photography, against Stability AI, the developer of the Stable Diffusion model. Getty had initially brought numerous claims, including:

  • Copyright infringement, split under English law into two typical strands:
    • Primary infringement: a direct breach of the copyright owner’s exclusive rights, occurring when a person, without authorisation, performs one of the acts reserved to the copyright holder;
    • Secondary infringement: an indirect form of infringement requiring knowledge (actual or reasonable grounds for knowing) that one is facilitating a primary infringement.
  • Database right infringement.
  • Trademark infringement.
  • Passing off: an English tort protecting a business’s goodwill against misrepresentation leading the public to believe that one party’s goods or services are those of another.

The claims that were abandoned

During the proceedings, Getty abandoned several claims, mainly due to difficulties in producing sufficient and adequate evidence. These claims were withdrawn:

  • AI training-related claim: Getty alleged that, in developing and training Stable Diffusion, Stability had scraped millions of copyright-protected images from the Getty Images and iStock websites, copied and stored them, and used them as training data. However, the claimants faced considerable difficulties in proving that any part of the training had taken place in the UK. In the absence of such evidence – and given that copyright has no extraterritorial effect – Getty discontinued this claim.
  • Claim relating to AI-generated outputs (primary copyright infringement): Getty argued that the generation of images similar to its own amounted to direct infringement by Stability. Although Getty identified certain prompts capable of producing similar images, Stability blocked those prompts and prevented the reproduction of the disputed images before trial. As a result, the claim was dropped because the practical remedy Getty would have obtained had it succeeded was already achieved.
  • Database rights claim: Getty maintained that its database was protected and that Stability had extracted and/or reused substantial parts of that database during training. As this claim was closely tied to the training-related claim, once evidence of training within the UK was lacking, no extraction or reutilisation in the jurisdiction could be sustained.

Trademark infringement

Getty argued that Stable Diffusion was capable of producing outputs reproducing the registered trademarks “GETTY IMAGES” and “ISTOCK,” particularly through the appearance of the “Getty Images” watermark. The High Court examined in great detail a series of issues, including:

  • whether outputs containing the Getty/iStock watermark were generated to a significant, rather than marginal, extent;
  • whether such outputs constituted use in the course of trade by Stability;
  • whether there was identity or similarity between the watermark and Getty’s registered marks;
  • whether there existed actual confusion, or at least a likelihood of confusion, among the public;
  • whether the phenomenon caused dilution of the distinctive character of Getty’s marks, reputational harm to the company, or an unfair advantage for Stability.

The court concluded that Getty hadn’t discharged its burden of proving the frequency with which the watermarks appeared, deeming the evidence insufficient to establish “use in the course of trade” attributable to the defendant. In other words, the judge accepted that the model had, in limited and specific cases, reproduced elements identical or similar to Getty’s marks at a level capable of amounting to infringement; but this didn’t demonstrate systemic or widespread violations.

The court also observed that the average consumer is able to understand that outputs generated by Stable Diffusion originate from an AI system and not from photographs produced or authorised by Getty.

Finally, the claim based on the reputation of the marks was dismissed, and the passing-off claim was considered unnecessary, as it added nothing beyond the assessments already made.

Copyright infringement (secondary infringement)

With respect to the secondary copyright infringement claim, Getty argued that Stability had imported into the UK an “infringing article” under section 22 of the UK Copyright, Designs and Patents Act (importing an infringing copy), and/or had possessed, distributed or made available such an article in the UK under section 23 (possessing or dealing with an infringing copy). The “article” in question was the pre-trained Stable Diffusion model.

If upheld, this claim would have prevented Stability from introducing, distributing or using Stable Diffusion in the UK.

On this point, the court accepted Getty’s contention that the term “article” may also refer to software, including AI models such as Stable Diffusion. But it held that the Stable Diffusion model is not, and does not contain, an infringing copy of Getty’s works.

During training, the model’s weights – the numerical values determining how it interprets and transforms information when generating images – are modified by exposure to datasets including Getty works. But these weights don’t constitute reproductions of the works; they don’t contain files, images or recognisable parts of Getty photographs, nor do they store temporary or permanent copies of them. Any temporary reproduction of images during training was deemed irrelevant to this claim.

Legal and practical considerations

The judgment is highly detailed and devotes significant space to complex technical issues. The case highlights the difficulties in proving IP infringements in the context of generative AI, particularly when training activities occur outside the national jurisdiction. In this setting, the burden of proof proved especially onerous for the claimants, significantly limiting the scope of their claims and affecting the overall outcome.

Nonetheless, the High Court’s decision stands as an important precedent for the sector. A parallel proceeding in the US is ongoing and may provide further developments on these issues.

Author: Lara Mastrangelo

 

Legal Tech

Generalist v Specific: Understanding which LegalTech tools to buy

The legal technology market is undergoing a fascinating evolution. As the sector matures and moves from pilot projects to production deployments, two distinct categories of tools are emerging, each serving different purposes and requiring different expertise levels. Understanding this spectrum is essential for legal teams making strategic technology investments.

The Swiss army knife: Versatile horizontal solutions

Consider the Swiss army knife: a remarkably useful tool that combines multiple functions in a single package. It can cut, open bottles, tighten screws, and perform dozens of other tasks competently. For most everyday situations, it’s more than adequate. The same applies to horizontal LegalTech solutions currently dominating the market.

These platforms are designed to serve a broad range of legal tasks across practice areas. They excel at supporting drafting activities, providing document analysis capabilities, enabling side-by-side comparison during due diligence reviews, and offering quick access to source materials when verifying citations in legal opinions. Their strength lies in versatility: a single platform can assist with contract review in the morning and legal research in the afternoon.

The proliferation of these generalist solutions is partly driven by market dynamics. Investors backing LegalTech ventures expect scalability and returns on investment. A tool that addresses multiple use cases across different practice areas can reach a larger market faster than a highly specialised solution. This economic reality has shaped the current landscape, producing platforms that aim to be helpful across the entire spectrum of legal work.

According to industry data, approximately 70% of LegalTech investment remains concentrated in Anglo-centric markets, with most funding flowing toward platforms offering broad horizontal capabilities. These solutions have become the default starting point for many legal departments exploring AI adoption, precisely because they require minimal configuration and can demonstrate value across multiple workflows relatively quickly.

The scalpel: Precision vertical solutions

No surgeon would perform an operation with a Swiss army knife. When precision matters and the stakes are high, professionals reach for the scalpel: a tool designed for one purpose, executed with extraordinary precision.

A parallel market is emerging in LegalTech: hyper-vertical solutions that don’t attempt to solve a hundred problems, but instead solve one problem exceptionally well. These tools are built by specialists for specialists, often founded by lawyers who spent years practising in specific domains and understood precisely what was missing from existing workflows.

Vertical LegalTech platforms are designed around the actual work patterns of practitioners in specific fields. A competition law AI platform understands merger control filing requirements across jurisdictions. A real estate technology solution integrates with land registry systems and knows property transaction workflows intimately. An intellectual property tool can navigate patent classification systems with the precision that generalist platforms simply cannot match.

The key differentiator is domain depth. Vertical solutions are trained on specialised datasets, integrate with industry-specific systems, and are designed around workflows that require genuine expertise to understand. They don’t just process legal text: they understand the context, the regulatory requirements, and the professional standards that govern specific practice areas.

The expert hands factor

Here lies a crucial distinction: while horizontal tools are designed for broad accessibility, vertical solutions often require expert hands to unlock their full potential. A scalpel in untrained hands is merely a sharp blade. In the hands of a surgeon, it becomes an instrument of precision.

Vertical LegalTech tools assume domain knowledge. They speak the language of their specific practice area, use terminology that practitioners understand, and produce outputs calibrated to professional expectations. This isn’t a limitation but a feature: by assuming expertise, these tools can operate at a higher level of sophistication and deliver results that generalist platforms can’t replicate.

The trade-off is clear. Horizontal solutions lower the barrier to entry and can be deployed broadly with minimal training. Vertical solutions demand more from their users but reward that expertise with superior precision and workflow integration.

Strategic implications for legal teams

Neither category is inherently superior. The choice depends on workflow requirements, risk tolerance, and the nature of the work being performed.

For general productivity enhancement, communication management, and routine document analysis and editing, horizontal solutions offer immediate value with minimal friction. They’re ideal for tasks where “good enough” is sufficient and where the cost of imprecision is low.

For specialised practice areas where accuracy is paramount, regulatory compliance is complex, and professional standards demand precision, vertical solutions provide capabilities that horizontal tools can’t match. The additional investment in learning specialised platforms pays dividends through higher accuracy, better workflow integration, and outputs that require less professional review and correction.

Many organisations are finding success with hybrid approaches: deploying horizontal platforms for general productivity while investing in vertical solutions for their core practice specialties. This portfolio approach acknowledges that different tools serve different purposes and that attempting to force a single solution to handle all use cases often results in suboptimal outcomes across the board.

Looking ahead

As LegalTech matures, both horizontal platforms and vertical solutions are becoming better. Legal professionals should choose technology based on their specific needs: general tools for broad routine tasks, specialised ones for precision work. Success depends on knowing when to use each tool.

Author: Tommaso Ricci

 


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Noemi Canova, Gabriele Cattaneo, Giovanni Chieco, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Giulio Napolitano, Andrea Pantaleo, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Dorina Simaku, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Matilde Losa and Arianna Porretti.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA).

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
