
24 April 2026

Can AI-generated content infringe pre-existing copyright works?

The state of play in the UK, EU and China

In the debates about copyright infringement occasioned by generative AI models, the focus has mostly been on infringement arising from training (which we explored in our previous article in this series). But no less concerning for some copyright owners is infringement resulting from the AI model outputs themselves. Such outputs, which can include text, image, audio and video content, can be identical or highly similar to pre-existing works. Indeed, much of the fervour around generative AI has centred on its ability to create works ‘in the style of’ iconic creators, sidestepping the need to involve, pay or sometimes even acknowledge such creators. Naturally, these developments have been controversial and have given rise to several thorny legal issues.

 

When an AI model produces an output, could this result in infringement of an existing copyright work?

Chinese courts were among the first in the world to conclude that the answer can be yes. One case involved a claim for copyright infringement brought by the Chinese licensor of the “Ultraman” character from Japan, in which the defendant’s AI model could be prompted to generate images identical or highly similar to the character. The Guangzhou Internet Court agreed that the defendant had infringed copyright in the character [1], with two courts in Hangzhou reaching the same conclusion in a similar case brought by the same plaintiff [2].

In the UK, in the Getty Images case [3], one of the initial claims was that the defendant had infringed the claimant’s right of communication to the public by making its AI model available in the UK, where it could be prompted to generate an image reproducing all or a substantial part of one of the claimant’s copyright works. However, because the claimant dropped this claim at trial, the court did not have the opportunity to rule on it.

In the EU, jurisprudence is beginning to illustrate how the general principles of copyright infringement apply to AI-generated outputs. If an output reproduces a copyright work, this may infringe the rights of reproduction and communication to the public; this was the finding of a German court in a case involving an AI model that reproduced certain song lyrics. Where the output adapts the original and constitutes a derivative work, this may infringe the right of adaptation. Less certain is whether the right of reproduction is infringed where the original work (or part of it) cannot be recognised in the AI-generated output. In line with the recent judgment of the CJEU in the Mio/Konektra case [4], a finding of copyright infringement requires that the copied original choices be recognisable in the allegedly infringing work, and this test applies equally to AI outputs. The CJEU has also been asked to rule on some of these questions: it will have to establish the EU law position on whether reproduction and communication to the public occur when a chatbot displays the (partial) text of news articles in response to a user request to summarise the contents of a webpage [5].

 

Does the plaintiff have to prove that the work was used to train the AI model? And what if they can’t?

Even when an AI model’s output is identical or highly similar to a plaintiff’s copyright work, it can be difficult for plaintiffs to know, let alone prove, that the work was among the data used to train the model. In the UK, EU and China, this difficulty might not be fatal for a plaintiff’s claim of copyright infringement. Naturally, evidence from the plaintiff that the work was used, if available, will strengthen a claim.

Where the degree of similarity is particularly high, courts may place the burden on the defendant to prove that the work was not used for training. In France, senators introduced a bill in December 2025 establishing a presumption of exploitation of cultural content by AI providers where there is a plausible indication that protected content may have been used, thus reversing the burden of proof in favour of copyright holders. The proposal has not yet been put to a vote, and its conditions of application remain uncertain [6].

Emerging AI transparency obligations may support plaintiffs looking to bring claims. For example, the EU AI Act obliges providers of general-purpose AI models to make available a sufficiently detailed summary of training content so that copyright holders can enforce their rights [7], and the European Commission published a template for this summary in July 2025. Potentially, such a summary might identify a particular database in which a plaintiff can prove their work has been included.

 

Who bears liability for infringing AI outputs?

As with the training issue, there are various potential answers depending on the circumstances.  

In the first instance, liability would most likely fall on the AI model provider. This entity may or may not be the same entity responsible for training the model (see above regarding the relevance of proving that the copyright works in question were used in training). Courts in Germany and China [8] have been willing to attribute liability to providers even where it is users, and not the providers themselves, who are responsible for the prompts that resulted in the infringing outputs.

In theory, AI model users who intentionally create infringing outputs may be liable in their own right, particularly when they are involved in the training process (as can be the case for models specifically made available for users to train with data of their choice). Liability may also fall on users who distribute infringing outputs. However, we are not aware of any cases in the UK, EU or China confirming the liability of a user in these circumstances.

Importantly, where financial exposure falls may be affected by the contractual arrangements between the parties, including terms and conditions with end users. The terms of some AI models intended for retail or enterprise use include indemnities given by the providers, protecting end users from liability for infringement of third-party copyright and other IP.

 

What about moral rights?

We are not so far aware of any relevant court cases in the UK, EU or China, but in theory, AI outputs could infringe the moral rights of third parties. For example, the right of attribution may be infringed if an output identical or substantially similar to a third-party work does not appropriately acknowledge the authorship, or if the work is presented as an original creation of the AI model. The right of integrity could also be infringed if an output constitutes a distortion or mutilation of an original work in a manner prejudicial to the author’s reputation or legitimate interests. Differences between jurisdictions will affect the claims available to plaintiffs.

 

What about when an AI model is prompted to produce an output ‘in the style of’ a third party?

It is one thing for an AI output to be identical or substantially similar to a particular third party’s work. It is another for an AI output to be merely ‘in the style of’ a third party (such as an author or artist), such that the work ‘could have been’ produced by that third party without resembling any of their actual works. Some prominent artists have objected to the use of AI to create works in their style.

This question engages several complex issues. The first is whether an AI-generated image can constitute a derivative work. The Guangzhou Internet Court in the “Ultraman” case in China found that the plaintiff’s right to prepare derivative works had been infringed, but the same finding might not be reached in all cases, and an AI output containing a recognisable IP-protected character is not necessarily the same as an AI output that merely reflects the “style” of a particular third party.

Whether copyright law protects an artist’s unique “style” is an issue not just for AI but for IP more generally. Traditionally, copyright law protects the expression of an idea, not the idea itself. Thus, copyright infringement is difficult to prove unless the allegedly infringing work is substantially similar to a particular work of the claimant.

As with other issues relating to AI and copyright, the legal position here may evolve as AI-generated content becomes more widespread. In the meantime, claimants aggrieved by the use of AI-generated works “in their style” might look beyond copyright law for legal recourse. Trade mark protection could apply if elements of the style in question are protected as trade marks or are otherwise distinctive, though the scope of such protection is naturally limited. If AI-generated content is being falsely claimed as originating from a claimant, protections against passing off and unfair competition could apply (not to mention the moral rights arguments above). And if products and services are being sold, avenues via consumer and advertising law may also be available.

 

Practical measures to mitigate copyright infringement risks in AI outputs

Although the legal position in China, the UK and the EU remains unsettled and unaligned, some practical recommendations can be made:

  • Before using an output generated by an AI model, conduct clearance to determine the risk of third-party IP infringement. It may be that alterations are required, or alternatively that licences or permissions should be sought.
  • Before the copyright law position settles, be aware that other legal and regulatory obligations may apply. For example, in the “Ultraman” case in China, the court found that the infringing images generated by the defendant’s AI model not only resulted in copyright infringement on the defendant’s part, but also violated the provisions of China’s July 2023 Interim Measures for the Management of Generative Artificial Intelligence Services, including obligations to respect third-party IP rights [9].
  • AI model developers and providers can consider incorporating filters into AI models to prevent the models producing infringing content, such as blocks on certain proprietary names, characters, trade marks or other identifying elements. In some territories (such as China), such filters are already to an extent required by law. However, given the myriad ways in which infringement can occur, not all of which can be clearly foreseen, filters alone may be ineffective. Further, even the best filters have their limitations and can sometimes be circumvented by inventive user prompts. In China, the court still found the defendant liable in the “Ultraman” case despite the filtering measures it had taken, as these measures were shown to be not entirely effective [10].
  • In agreements relating to AI models which could produce infringing content, ensure the agreement contains appropriate representations, warranties and indemnities regarding infringement of third-party IP. This applies to agreements in which one party will use AI to produce content for another party (such as an agency using AI to create marketing content). It also applies to agreements in which one party is providing an AI model for the other party’s own use (such as a business or user licensing an AI model for enterprise or personal use). In the latter case, some AI model developers have in fact advertised that they will provide indemnities for infringement occasioned by use of their models.
  • If you are a rightsholder, the proliferation of AI-generated infringing content might necessitate changes in infringement monitoring and enforcement practices. For example, infringing content circulating on ecommerce or social media may be downstream of one or more popular AI-content generation tools, requiring enforcement action in more places than before.
  • Where AI models are being used in an enterprise context, with employees and consultants generating content, consider implementing policies and training to alert personnel to the copyright risks and mitigate potential liability. This might include guidance on appropriate and inappropriate prompts.
  • If you specifically want to use AI to produce works ‘in the style of’ a particular artist, consider proactively obtaining the relevant licences or permissions so as to reduce the risk of liability.
  • When assessing infringement risk for AI outputs in the EU, consideration should also be given to whether the output may qualify as a pastiche. The availability of this defence will depend on an objective assessment of whether the output engages in a recognisable artistic or creative dialogue with the pre-existing work, and will not extend to outputs that are mere concealed imitations. Even where the pastiche criteria appear to be met, the three-step test must also be satisfied, which may limit the defence’s availability for commercial AI applications.
  • Where an AI model results in infringing outputs, consider whether the training of the model requires adjustment. On training, see the first article in our series.
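
By way of illustration only, the keyword-based filtering measure described above can be sketched in a few lines of code. The function names and blocklist entries below are our own illustrative assumptions, not any provider’s actual implementation, and, as noted, real-world filters need far more sophisticated detection and can still be circumvented:

```python
# Illustrative sketch of a naive blocklist filter applied to AI model outputs.
# Hypothetical example only: real systems supplement keyword checks with
# classifier-based and other detection, since simple blocklists are easily
# evaded by inventive prompts.

# Hypothetical blocklist of protected names/characters (examples only).
BLOCKLIST = ["ultraman", "example protected character"]

def violates_blocklist(text: str) -> bool:
    """Return True if the output mentions any blocklisted term."""
    normalised = text.lower()
    return any(term in normalised for term in BLOCKLIST)

def filter_output(text: str) -> str:
    """Withhold outputs that trip the blocklist; pass others through."""
    if violates_blocklist(text):
        return "[output withheld: potential third-party IP]"
    return text
```

As the “Ultraman” litigation shows, a court may still find such measures insufficient where they are not entirely effective in practice.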

This article touched on how AI-generated outputs can potentially infringe third-party copyright. But what about copyright in the outputs themselves? The third and final article in our series will explore this.

If you have questions about anything raised in this article, please get in touch with the authors or your regular DLA Piper contact. 


1. Guangzhou Internet Court 8 February 2024 ruling (2024) Yue 0192 Min Chu No. 113 regarding Shanghai Character License Administrative Co., Ltd. and “Ultraman”
2. Hangzhou Intermediate People’s Court December 2024 ruling (2024) Zhe 01 Min Zhong No. 10332 regarding Shanghai Character License Administrative Co., Ltd. and “Ultraman”; appeal from Hangzhou Internet Court ruling (2024) Zhe 0192 Min Chu No. 1587
3. Getty Images (US) Inc and others v Stability AI Ltd [2025] EWHC 2863 (Ch)
4. Mio AB and others v Galleri Mikael & Thomas Asplund Aktiebolag (C‑580/23 and C‑795/23)
5. Like Company v Google Ireland Limited (C-250/25), preliminary reference of 3 April 2025 from the Budapest Környéki Törvényszék
6. French Senate, Proposition de loi no. 220 (2025-2026) relative à l’instauration d’une présomption d’exploitation des contenus culturels par les fournisseurs d’intelligence artificielle (bill establishing a presumption of exploitation of cultural content by artificial intelligence providers), introduced on 12 December 2025
7. Article 53, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (EU AI Act)
8. Hangzhou Intermediate People’s Court December 2024 ruling (2024) Zhe 01 Min Zhong No. 10332 regarding Shanghai Character License Administrative Co., Ltd. and “Ultraman”; appeal from Hangzhou Internet Court ruling (2024) Zhe 0192 Min Chu No. 1587
9. Guangzhou Internet Court 8 February 2024 ruling (2024) Yue 0192 Min Chu No. 113 regarding Shanghai Character License Administrative Co., Ltd. and “Ultraman”
10. Guangzhou Internet Court 8 February 2024 ruling (2024) Yue 0192 Min Chu No. 113 regarding Shanghai Character License Administrative Co., Ltd. and “Ultraman”
