
19 January 2026
Innovation Law Insights
New edition of DLA Piper's Gambling Laws of the World
The new edition of DLA Piper's Gambling Laws of the World guide is now available. Covering almost 50 jurisdictions, the guide examines the gambling compliance topics that every business needs to address in its operations. The guide is available here.
Legal Break
Pay-or-Okay and its privacy law implications: What the new NOYB study really tells us
In this episode of Legal Break, Giulio Coraggio of DLA Piper discusses the pay-or-okay model and whether it truly offers consumers a genuine and free choice, in light of the recent study on Pay-or-Okay published by noyb.eu. Watch the episode of Legal Break here.
Privacy and cybersecurity
CNIL issues recommendations on AI compliance for suppliers
In its recommendations published on 5 January 2026 (the Recommendations), the French data protection authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), sets out an operational methodology aimed at enabling suppliers of AI models to assess and document whether a model, or a system incorporating it, falls within the scope of Regulation (EU) 2016/679 (GDPR).
The CNIL’s contribution doesn’t seek to redefine the conceptual categories of data protection law. Rather, it deliberately works within familiar coordinates, applying them to a technology that is currently testing their systemic resilience.
The starting point for the analysis is deliberately sober. CNIL takes as its reference point the classic criterion, now well established in European case law and practice, according to which processing falls within the scope of the GDPR when it concerns data relating to identified or identifiable natural persons, taking into account the means reasonably likely to be used for identification.
The Recommendations clarify that an AI model can only be considered excluded from the scope of the GDPR if the probability of re-identification of the persons whose data were used in the training phase can be qualified as insignificant. In this sense, CNIL's position is explicitly in line with the guidance expressed by the European Data Protection Board (EDPB). In Opinion 28/2024 the EDPB clarified that the classification of an AI model as anonymous can never be presumed in the abstract, but must be the result of a case-by-case assessment. This assessment must take into account not only the intrinsic characteristics of the model, but also the concrete methods of access, use and interaction, and the state of the art of data extraction techniques.
AI model and system: Methodological premises
One of the most significant and least obvious steps in the methodology proposed by CNIL concerns the conceptual and functional distinction between an AI model and an AI system. This distinction isn’t merely terminological but has a direct impact on the logical order of the analysis required for GDPR applicability.
In CNIL Recommendations:
- The AI model is understood as a statistical representation of the characteristics of the dataset used for training. As such, it may incorporate information that’s sufficiently granular to allow the direct or indirect reconstruction of personal data relating to individuals in the training set. The legally relevant risk therefore lies at this level: it’s in the model that a storage or inference capacity might be hidden that makes re-identification possible.
- The AI system, on the other hand, represents the application and organisational level that governs the use of the model. Interfaces, access controls, query methods, output filters, functional limits, security measures and organisational safeguards all contribute to defining how, and to what extent, the model is actually accessible and exploitable. It’s the system that determines the operating context, but it does not, in itself, eliminate the intrinsic risk that may be embedded in the model.
The legal consequence of this distinction is central to CNIL's approach: the analysis of the applicability of the GDPR has to start from the model. Only after ascertaining that the model cannot be considered anonymous does it become relevant to question whether integration into a system equipped with robust measures reduces the probability of re-identification to the point of rendering it insignificant. In other words, the Authority implicitly excludes a “reverse” approach, according to which the absence of perceptible risk at the system level would render the nature of the underlying model irrelevant. Risk must be assessed at the point closest to its possible origin, ie where information relating to natural persons is stored, even if only potentially. The system can mitigate or neutralise this risk, but it can’t retroactively erase it or render the analysis of the model superfluous.
CNIL also clarifies that the status assessment applies to any model trained on personal data, regardless of its stated purpose or the functions for which it was designed. It’s not decisive that the model is intended to produce information about specific individuals; what matters is the technical possibility, even accidental, of extracting or inferring personal data by reasonably usable means. In other words, the purpose of the model doesn’t operate as a legal exemption criterion. This approach is fully in line with the EDPB's guidance, according to which a model designed to produce or infer information about natural persons necessarily contains personal data, while a model trained on personal data but not designed for such purposes can only be considered anonymous if the risk of direct or indirect identification is highly unlikely, both through parameter analysis and through queries.
This has a significant systemic effect on the entire AI supply chain. If the model is considered anonymous, a system that uses it exclusively will, in principle, be excluded from the scope of the GDPR. If, on the other hand, the model isn’t anonymous, any exclusion of the system will require an independent and additional analysis, based on appropriate measures and tests to demonstrate that the risk of re-identification has been effectively neutralised. In the absence of such demonstration, the processing inevitably extends to downstream actors as well.
From the dataset to the model: Risk analysis as an accountability obligation
One of the most significant elements of the methodology outlined by CNIL is the unambiguous statement that the analysis of the status of an AI model isn’t an option left to the technical discretion of the provider, but a genuine legal obligation, directly attributable to the principle of accountability enshrined in the GDPR. Whenever a model is trained on personal data, the re-identification risk analysis must be conducted systematically, regardless of the outcome it might produce.
The underlying logic is fully consistent with the framework of European data protection law. It’s not necessary for the processing to have immediate effects on the data subjects for an assessment obligation to arise. It’s sufficient that the processing incorporates a legally relevant risk, ie the non-negligible possibility that natural persons could be identified, directly or indirectly, from the model. The analysis required by CNIL doesn’t serve to confirm a given assumption of anonymity, but to verify whether that assumption can be supported in the light of objective and verifiable criteria.
In this context, the role of the model provider takes on clear centrality. It’s the provider who, in most cases, determines the purposes and means of development processing: they select the training dataset, define the model architecture, establish the training methods and envisage the contexts of use. So the provider has the primary responsibility for conducting the analysis and drawing a reasoned conclusion about the applicability or otherwise of the GDPR.
- When the provider concludes that the model should be excluded from the scope of application of the GDPR, this conclusion cannot be confined to an internal or purely declarative assessment. It must be supported by documentation suitable for submission to the supervisory authority for review, allowing the logical and technical path followed to be reconstructed. It’s not only the outcome of the analysis that’s relevant, but also its traceability: the measures taken to reduce the risk of data storage, the assessments made on the state of the art, and any tests conducted to verify the model's resistance to re-identification attacks. In this sense, the documentation takes on an eminently probative function. It becomes the tool through which the provider demonstrates that it has fulfilled its accountability obligation, making its position defensible in the event of verification or dispute.
- CNIL extends this logic to the system level. If a model cannot be considered anonymous, it’s not impossible that its use within a larger system could reduce the risk to the point of making it insignificant. However, even in this case, the analytical burden doesn’t disappear but is shifted. The system provider has to conduct its own independent analysis, taking into account not only the characteristics of the model, but also the measures implemented in terms of access, interaction, output filtering and usage monitoring. This analysis can’t be limited to referring to assessments made upstream on the model but must be based on specific checks relating to the system as a whole.
Another important aspect concerns the relational and informational dimension of accountability throughout the AI supply chain. When a system provider claims that the use of the system doesn’t fall within the scope of the GDPR despite the use of a non-anonymous model, CNIL recommends sharing sufficient documentation to allow users to verify this claim. This is good practice which, although not a formal obligation, has a concrete impact in terms of the allocation of responsibilities.
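By way of illustration only – and as our own assumption rather than anything prescribed in the Recommendations – the kind of re-identification resistance test a provider might document could take the form of a simple membership-inference check, comparing the model’s behaviour on records it was trained on with its behaviour on records it has never seen. The sketch below uses synthetic data and scikit-learn; the model, metric and interpretation of the result are hypothetical.

```python
# Illustrative sketch only: a crude membership-inference check of the kind a
# provider might document as part of a re-identification risk analysis.
# The dataset, model and metric are hypothetical assumptions; CNIL does not
# prescribe this (or any other) specific test.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a training corpus that may contain personal data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def true_label_confidence(clf, X, y):
    """Confidence the model assigns to the true label of each record."""
    proba = clf.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Members (records used for training) vs non-members (never seen by the model):
# if the model is systematically more confident on members, an attacker could
# infer training-set membership, which is one form of re-identification risk.
scores = np.concatenate([
    true_label_confidence(model, X_train, y_train),
    true_label_confidence(model, X_holdout, y_holdout),
])
is_member = np.concatenate([np.ones(len(y_train)), np.zeros(len(y_holdout))])

auc = roc_auc_score(is_member, scores)
print(f"Membership-inference AUC: {auc:.3f}")
# An AUC close to 0.5 suggests little measurable leakage in this crude test;
# values well above 0.5 would need to be investigated and documented before
# any claim of anonymity is made.
```

Whichever test is chosen, the point made by CNIL is that it’s the documented methodology and its results – not a bare assertion of anonymity – that make the provider’s position defensible.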
‘Reasonably usable means’: The true legal standard of anonymity
The criterion of “reasonably usable means” is the cornerstone of the entire assessment of the anonymity of AI models. The CNIL’s approach is particularly rigorous on this point and fully consistent with European data protection law: anonymity isn’t assessed in the abstract, nor in the light of purely theoretical scenarios, but in relation to the concrete and realistic possibilities of re-identification.
The Recommendations clarify that what is technically possible in extreme or academic conditions is not relevant, just as it’s not sufficient to limit the analysis to the “ordinary” or declared usage scenario of the provider. The assessment must be made in an intermediate area, based on objective criteria, taking into account the capabilities of a realistic attacker and the overall context in which the model is developed and used. It is precisely this balance that makes the notion of reasonably usable means an eminently legal category, rather than a mere technical variable.
- From this perspective, the risk of re-identification doesn’t arise from the model considered as an isolated object, but from the information ecosystem it operates in. The possibility of combining the information obtainable from the model with additional data, whether publicly available or otherwise accessible, directly affects the assessment. Identifiability is the result of a correlation, not a single source. This approach is consistent with the established interpretation of the GDPR, which requires consideration of all available means, not just the intrinsic characteristics of the processing.
- Another key element is the cost and time required to extract personal data. CNIL recognises that attacks requiring disproportionate resources, highly specialised skills or timeframes incompatible with realistic use may, in certain circumstances, not be considered reasonably usable means. But this assessment is by definition dynamic. What appears complex or burdensome today may quickly become accessible in a context of accelerating attack techniques and the spread of pattern analysis tools. As a result, it’s impossible to crystallise anonymity in a snapshot of the state of the art frozen in time.
- Particularly significant is the attention paid to unauthorised parties’ access to the model. The analysis cannot be limited to legitimate users or use cases envisaged by the provider, but must include the possibility – far from remote – of unauthorised or non-compliant access. In this sense, CNIL reiterates a principle of great practical importance: merely restricting access doesn’t guarantee the anonymity of the model. Contractual, organisational or access control measures may reduce the likelihood of re-identification, but they don’t automatically render it insignificant.
- The assessment of reasonably usable means must also take into account the deployment context. A model made accessible to the public, or integrated into a service usable by an indefinite number of users, has a structurally different risk profile than a model used in an internal and controlled environment.
The conclusion of the argument is particularly clear from a legal point of view. A model can only be considered excluded from the scope of the GDPR if, in light of the reasonably usable means, the probability of extraction or inference of personal data is insignificant. It’s not sufficient for this probability to be low, limited or difficult to achieve: the required standard is deliberately high and reflects the centrality of the principle of accountability in European law.
Anonymity as an intrinsically unstable condition
One of the most relevant aspects of the approach outlined by CNIL concerns the inherently non-definitive nature of the analysis of the anonymity of AI models and systems. The conclusion that the probability of re-identification is insignificant cannot be taken for granted once and for all, but must be constantly reviewed in light of technological, scientific and operational developments. Anonymity isn’t a static property of the model, but rather a contextual and temporal assessment, subject to review.
CNIL expressly clarifies that a model or system initially considered outside the scope of the GDPR may subsequently fall within its scope. This may occur, for example, in the presence of new attack techniques, previously unknown vulnerabilities or a significant change in the state of the art that makes previously unrealistic methods of extracting personal data feasible. The decisive factor is not the correctness of the original analysis, but the provider's ability to recognise and manage the evolution of risk over time. This approach gives rise to a specific obligation of continuous review. Model and system providers have to periodically verify the validity of the assessments carried out, considering both developments in scientific research and the operational experience gained in the practical use of the model. From this perspective, compliance doesn’t end at the design or production stage but accompanies the entire life cycle of the technological artefact.
Incident management is a crucial test in this context. If personal data is extracted – or even if there’s a reasonable likelihood that this has occurred – the provider must assess whether the conditions for a personal data breach under the GDPR have been met. The fact that the model was legitimately classified as anonymous on the basis of a diligent analysis does not, in itself, exclude the existence of a data breach. What matters, from a legal point of view, is the ability of the provider to react correctly to the event, documenting it and, where appropriate, activating the notification and communication obligations provided for in Articles 33 and 34 of the GDPR.
CNIL takes a balanced approach on this point. The emergence of a vulnerability does not automatically imply liability on the part of the provider, provided that the initial analysis was based on the state of the art and adequately documented. But the legitimacy of the provider's position will depend crucially on the timeliness and adequacy of the measures taken following the incident. Once anonymity has been called into question, a structured response is required, not a merely formal defence. This approach is closely aligned with the broader European regulatory framework, and in particular with Regulation (EU) 2024/1689 (the AI Act).
Although operating on different levels, the GDPR and the AI Act share a convergent vision of compliance as a dynamic process. The post-market surveillance, risk management and incident reporting obligations for high-risk AI systems are consistently reflected in CNIL's methodology, including in terms of personal data protection. Continuous monitoring of the model's ability to “remember” individuals becomes a structural element of AI governance.
Ultimately, the methodology proposed by CNIL redesigns how AI model compliance is conceived. Anonymity isn’t a statement of principle, nor a technical attribute to be applied once and for all, but an unstable legal condition that needs to be kept under constant review. In a constantly evolving technological ecosystem, trust isn’t based on assumptions, but on the ability to demonstrate, over time, that the risk to individuals' fundamental rights remains effectively below the threshold of legal relevance.
Author: Giulio Napolitano
Artificial Intelligence
Abuse of dominant position and AI training: The Google case before the European Commission
The widespread adoption of generative AI has raised complex questions concerning compliance with competition rules and the use of digital content. On 9 December 2025 the European Commission opened an antitrust investigation into Alphabet/Google to assess whether the company may have infringed Article 102 of the Treaty on the Functioning of the European Union (TFEU) by using content from web publishers and videos uploaded to YouTube to train and operate AI services such as AI Overviews and AI Mode.
The investigation also stems from complaints lodged by independent European publishers, supported by associations committed to safeguarding an open and competitive web. In June 2025, these stakeholders reported to EU institutions that the introduction of AI Overviews had led to a significant diversion of traffic away from original news websites, with reductions of up to 50% in visits to articles and, consequently, in advertising revenues. The complaints highlighted the possible existence of abuses of dominant position and discriminatory practices in access to information, stressing that Google’s position in the online search market – with a market share close to 90% in Europe – effectively makes it impracticable for many publishers to refuse the use of their content without suffering a substantial loss of visibility and economic relevance.
Based on these complaints and further investigative activities, the European Commission has identified two distinct ways in which Google may be exploiting third-party content to power its AI services while abusing its dominant position. With regard to web publishers’ content, Google uses such material to generate AI Overviews and AI Mode, integrating summaries and conversational responses directly into search results pages. AI Overviews produces automated summaries displayed above organic search results, while AI Mode operates as an interactive, chatbot-style interface capable of responding to users’ queries in a conversational manner. The Commission wants to determine whether these services rely on editorial content without adequate remuneration and without offering publishers a genuine possibility to refuse such use without losing visibility on Google Search – an issue of particular importance given that many publishers depend on search-driven traffic to sustain their business models.
Separately, Google uses videos and other content uploaded to YouTube to train its generative AI models. In this case, creators automatically grant Google the right to use their content for various purposes, including model training, without receiving any specific remuneration and without being able to opt out without restrictions on access to the platform. At the same time, competing developers are unable to access the same content, potentially creating an unjustified competitive advantage.
This distinction between textual and audiovisual data flows is crucial. Publishers’ content directly affects traffic and advertising revenues linked to search, whereas YouTube content influences competitive dynamics in the generative AI ecosystem, with significant implications for the development opportunities available to competing AI providers.
The investigation is grounded in Article 102 TFEU, which prohibits the abuse of a dominant position, and is also governed by Regulation (EC) No 1/2003, which sets out the rules for the exercise of the European Commission’s competition enforcement powers. The opening of proceedings pursuant to Article 11(6) of Regulation 1/2003 relieves national competition authorities of their competence to apply EU antitrust rules in parallel, while Article 16(1) requires national courts to avoid decisions that would conflict with the Commission’s ongoing investigation.
Google has responded by emphasising the benefits of AI for citizens and businesses and has expressed its willingness to cooperate with European authorities, while cautioning that overly restrictive regulation could slow the development and adoption of AI. The Commission has reiterated that technological innovation must comply with the principles of fair competition and equitable access to data, and that AI can’t be used as a means to unjustifiably consolidate dominant market positions.
The investigation has no predetermined deadline and entails an in-depth assessment of contractual, technical and market data. Should infringements of Article 102 TFEU or Article 54 of the Agreement on the European Economic Area be established, Google could face fines of up to 10% of its worldwide annual turnover. It might also have to take corrective measures including changing content usage terms or opening fairer access channels for competing AI developers.
The practical implications are extensive: the outcome of the proceeding could redefine how digital content is monetised and exploited in the AI ecosystem, affecting publishers, content creators, and developers of models based on third-party data. This investigation represents a legal frontier in AI regulation and digital markets, constituting one of the first cases in which European rules are applied directly to commercial practices involving AI. Its outcome will have a profound impact on regulatory standards concerning competition protection in the context of AI.
Author: Giovanni Chieco
Intellectual Property
Permission to appeal granted in Getty Images v Stability AI
The dispute between Getty Images and Stability AI is one of the most significant precedents to date concerning the relationship between AI and copyright law. It’s also one of the earliest attempts to apply traditional intellectual property concepts to generative AI models. The proceedings, commenced in 2023 before the High Court of England and Wales, put the use of protected works for training AI systems such as Stable Diffusion at the centre of the legal debate.
Getty Images was founded in 1995 and operates globally in the creation and licensing of visual content through the Getty and iStock brands. It brought proceedings against Stability AI alleging that the Stable Diffusion model had been trained using millions of copyright-protected images without authorisation. The original claims were wide-ranging and included allegations of copyright infringement, database right infringement, trademark infringement and passing off, relating to the model itself, its training process and the images generated by it.
Stability AI, a company specialising in the development of generative AI tools, contested the allegations. It argued that Stable Diffusion neither stores nor reproduces protected images, and that the model's development and training occurred outside the UK. Stability submitted that the copyright and database rights that Getty relies on, as rights governed by UK law, didn’t apply to the activities in question, as they had occurred outside the UK.
Over the course of the proceedings, the scope of the case was progressively narrowed. Getty significantly curtailed its claims relating to the model’s outputs. Its attempt to proceed by way of a representative action on behalf of tens of thousands of third-party photographers was rejected. And ultimately it abandoned its claims of primary copyright and database right infringement in full. This withdrawal occurred at a late stage of the proceedings, largely due to difficulties in establishing the geographic location of the training activities.
By the conclusion of the trial in June 2025, only two issues remained. The first concerned alleged secondary copyright infringement, namely, whether making files containing the so-called “model weights” of Stable Diffusion available in the UK could amount to indirect infringement. The second concerned alleged trademark infringement arising from the appearance, in certain generated images, of elements resembling Getty's watermarks.
In its November 2025 judgment, the High Court delivered a detailed decision that, overall, favoured Stability AI. As regards secondary copyright infringement, the court dismissed Getty’s claim, accepting that the files containing the model parameters could fall within the broad statutory notion of an “article” under UK copyright law, which isn’t limited to tangible objects and may include intangible digital entities. The court held that those files couldn’t be characterised as “infringing copies” since – on the judge’s interpretation – an article can only qualify as such if it has in fact incorporated, contained or stored a copy of a copyright-protected work, a condition that wasn’t met in the case of the Stable Diffusion models.
The court’s findings on trademarks were more nuanced. While the judge rejected the broader claims and dismissed the allegation based on enhanced trademark protection, she nevertheless identified extremely limited instances of infringement in relation to early versions of the model, in which certain generated images displayed watermark-like features sufficiently similar to the Getty or iStock marks.
In December 2025, the court considered the parties’ applications for permission to appeal. Getty was granted permission to appeal the dismissal of its secondary copyright infringement claim – specifically, the contention that Stable Diffusion itself constitutes an “infringing copy” of the works on which it was trained. This raises a question of law of considerable importance, namely the interpretation of the concept of “infringing copy” in the context of AI models, an issue never previously addressed by the courts and one with potentially significant implications for the generative AI sector.
By contrast, Stability AI was refused permission to appeal the court’s limited findings on trademark infringement.
Taken as a whole, Getty Images v Stability AI highlights the difficulty of applying traditional intellectual property concepts to fundamentally new technologies. The outcome of the appeal on secondary copyright infringement will be particularly significant. It could directly affect the ability to distribute and commercialise in the UK AI models trained on protected works without authorisation, contributing to the definition of the legal boundaries for the development of AI in the years to come.
Author: Maria Vittoria Pessina
AI and the future of drug discovery: The Recursion case
The development of a new drug is one of the most complex and time-consuming industrial processes. On average, it takes over ten years and investments of more than USD2 billion to bring a new molecule to market. Failure rates are very high, especially in the advanced clinical phases. But AI is transforming how drugs are discovered, designed and evaluated.
In recent years, a growing number of pharmaceutical companies have started integrating machine-learning models and advanced data-analysis tools into their entire research and development pipelines. The goal is to reduce time and costs, increase the probability of success and enhance decision-making quality in the early stages, before projects require particularly expensive clinical trials. One of the main advantages of AI in this field is its ability to analyse vast volumes of heterogeneous data, such as cellular images, genomic and transcriptomic information and chemical structures. This approach is particularly useful in the study of complex diseases, including cancer and neurodegenerative disorders.
The case of Recursion Pharmaceuticals is emblematic of this evolution. The Salt Lake City-based company has built its industrial model on the integration of AI, high-throughput phenotypic screening and laboratory process automation. Recursion’s platform can generate and analyse millions of cellular images, accelerating the identification of new therapeutic hypotheses by taking an integrated approach to the entire drug-discovery process. Rather than applying AI to a single phase, the company aims to coordinate data and models across all stages of research. This produces more robust biological hypotheses and significantly reduces experimental timelines compared with traditional standards.
Early results show a substantial compression of timelines in the initial phases, which are characterised by a high degree of automation. If these programmes demonstrate clinical efficacy in more advanced stages, this model could establish itself as a benchmark for the entire sector.
While Recursion is among the pioneers of this approach, it’s not an isolated case. An increasing number of pharmaceutical companies are investing in proprietary AI platforms or entering into collaborations with technology startups and high-performance computing specialists. Today, AI is used to design new molecules, predict interactions between drugs and biological targets, analyse data and improve the design of clinical trials.
Despite these promising prospects, the adoption of AI in drug development still presents significant challenges. The quality and availability of data remain central issues, as do the transparency and interpretability of models – particularly important in a highly regulated sector – and the actual ability of algorithms to translate statistical correlations into tangible clinical benefits.
But the path forward is now clearly defined. AI is no longer a marginal experiment, but a strategic tool set to have a profound impact on the organisation, costs and underlying logic of pharmaceutical research.
Author: Noemi Canova
The “Elfo di Babbo Natale” case: Copyright protection and the boundaries of plagiarism
The Venice Court was asked to clarify the scope of copyright protection and to identify the elements that distinguish lawful inspiration from plagiarism and counterfeiting.
Background of the dispute
The case arose from interim proceedings brought before the Venice Court by a US company specialising in the creation of Christmas characters, stories and traditions, best known for The Elf on the Shelf phenomenon. The project consists of an illustrated book first published in 2005 and a doll depicting the main character, the “Scout Elf,” registered for copyright in the US in 2009 and sold to consumers in boxed sets or kits. The book and the doll have been marketed in Italy since 2020 through a local distributor.
The claimant argued that both the book L’Elfo di Babbo Natale and the accompanying plush elf doll – sold either individually or in kits from 2023 via the website of one of the defendants – unlawfully copied The Elf on the Shelf. Proceedings were brought not only against the distributor, but also against the consortium involved in pre-litigation negotiations, the manufacturer of the contested doll and the company that authored and published L’Elfo di Babbo Natale.
According to the claimant, the challenged book reproduced all the key and distinctive elements of The Elf on the Shelf, while the plush doll displayed stylistic and physical similarities to the Scout Elf character. The claimant also alleged unfair competition, claiming that the defendants had taken undue advantage of the reputation of the work and of its broader business project, with a risk of dilution of The Elf on the Shelf brand.
Is The Elf on the Shelf protected by copyright?
The court reiterated well-established principles of copyright law: copyright doesn’t protect ideas as such, but only the creative form in which those ideas are expressed. As a result, the same underlying idea may legitimately give rise to multiple works, each protected – or not – depending on the author’s individual creative contribution.
In literary works, protection extends to both the external form (style, language, illustrations) and the internal form (narrative structure, plot and characters), while the mere concept or subject matter remains outside the scope of protection.
Although The Elf on the Shelf draws on a well-known Christmas tradition with mythological roots, the court found that it displays sufficient creativity to qualify for copyright protection. Its originality lies in the specific narrative format: a short, rhyming story written in simple language, illustrated with warm-toned watercolours and centred on the Scout Elf, portrayed as Santa Claus’s silent observer and messenger.
Comparing the two books
Following a side-by-side analysis, the court ruled out both plagiarism and counterfeiting by L’Elfo di Babbo Natale.
In doing so, the court clarified that plagiarism involves the parasitic reproduction of the creative elements of another work, while counterfeiting occurs when a work is reproduced with only minimal, non-creative changes designed to disguise copying. Which infringement is at issue depends on whether the author’s economic rights or their right to authorship is affected.
Turning to the substance of the comparison, the court highlighted clear differences between the two books. These differences concern both the external form – such as the use of digital illustrations, brighter colours and a longer, more complex structure – and the internal form. While The Elf on the Shelf is built around the “good children versus bad children” narrative and the elf’s monitoring role, L’Elfo di Babbo Natale tells a more elaborate story in which the elf is presented as a “magic little brother,” a friendly companion with a supportive and non-judgmental role.
According to the court, the two works merely share a general idea, which is not protected by copyright, but diverge in their essential expressive choices, reflecting distinct creative visions.
The Scout Elf doll
The court carried out a separate assessment of the doll. It noted that the claimant hadn’t clearly identified the legal basis on which copyright protection was sought, nor demonstrated – if the doll were to be classified as an industrial design – the artistic value required under Italian copyright law.
Even when considered as a figurative depiction of a fictional character, the court found the doll to lack sufficient originality, as it combines features typical of children’s toys with elements drawn from the traditional imagery of Christmas elves. In any event, the contested plush doll differed significantly in size, materials, shapes and aesthetic details, making any claim of counterfeiting untenable.
No unfair competition
The court also dismissed the unfair competition claims. It found no slavish imitation and no risk of confusion for the average consumer, particularly in light of the different packaging and presentation of the products.
Likewise, parasitic competition was excluded, as the defendants’ conduct didn’t amount to a systematic and ongoing exploitation of the claimant’s business initiatives.
The ruling
Finding no fumus boni iuris in relation to either copyright infringement or unfair competition, the Venice Court dismissed the interim application in its entirety and ordered the claimant to bear the legal costs incurred by all defendants.
Author: Lara Mastrangelo
Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Noemi Canova, Gabriele Cattaneo, Giovanni Chieco, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Giulio Napolitano, Andrea Pantaleo, Deborah Paracchini, Maria Vittoria Pessina, Tommaso Ricci, Marianna Riedo, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.
Articles concerning Telecommunications are curated by Massimo D’Andrea, Matilde Losa and Arianna Porretti.
For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.
Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.
You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA).
If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.