
19 March 2026
TDM and training AI models: UK Government publishes report and impact assessment on copyright and AI
In the UK, we are a small step closer to clarity on the interplay between copyright and AI following the publication on 18 March 2026 of the UK Government’s report on the use of copyright works in the development of AI systems (Report) and related economic impact assessment (Impact Assessment). However, neither the Report nor the Impact Assessment changes the law, addresses the legality of AI training under UK copyright law or sets out any proposed changes. Instead, they emphasise balance, the need for further evidence and caution in order to inform future policy developments.

The Government recognises both the economic and societal importance of the UK’s creative industries and the growth potential of AI, but concludes that there is currently insufficient evidence to justify immediate reform of existing laws. Rather than endorsing one of the consultation options now, the Report positions monitoring of international developments and further information gathering as the appropriate next steps for the Government prior to any future change to the Copyright, Designs and Patents Act 1988 (CDPA).

The Report and Impact Assessment do not in themselves resolve any of the current legal challenges: most importantly, whether and how the use of copyright works to train AI constitutes copyright infringement. Nonetheless, they represent the clearest statement so far of the Government’s position on these issues, and set the agenda for any future reform.
Background and context
As in many territories around the world, the copyright issues raised by AI have been the subject of intense discussion in the UK for several years.
One of the key issues in the debate is whether the UK’s existing exception to copyright infringement for text and data mining (TDM) in section 29A of the CDPA (Existing TDM Exception) should be retained or amended, with options including bolstering protections for copyright owners on the one hand, or for parties carrying out AI training on the other. The Government carried out a public consultation on the issues between 17 December 2024 and 25 February 2025 (Copyright and AI Consultation), and, following responses to the consultation, issued a statement of progress on 15 December 2025. See our comments on these previous developments here, here and here. Publication of the Impact Assessment and the Report was required under sections 135 and 136 respectively of the Data (Use and Access) Act 2025 (DUAA).
Stakeholders on all sides are keen to obtain legal certainty on these issues. Widespread attention has been focused on the Getty Images v Stability AI case; however, many of the most interesting claims in that case were dropped at trial, and any appeal of the High Court’s 4 November 2025 judgment will likely not be heard for many months (see our comments here).
On 6 March 2026, the House of Lords Communications and Digital Committee published its own report on AI, copyright and the creative industries (HoL Report). In it, the Committee urges a “licensing‑first” approach to generative AI, rejecting any broad new commercial TDM exception or opt‑out regime and advising against reforms to the CDPA that would weaken incentives to license training data. Instead, it calls for statutory, granular transparency obligations on AI developers (including confidential regulator‑facing disclosures), support for provenance, watermarking and labelling standards, and clear expectations that AI developers operating in the UK obtain appropriate licences.
TDM exception options
To recap, in the Copyright and AI Consultation, the Government consulted on four options to retain or amend the Existing TDM Exception. These options were:
- maintain the status quo (i.e. do nothing and retain the Existing TDM Exception) (Do Nothing (Option 0));
- strengthen copyright by making it a requirement to have a licence to use copyright protected works in all circumstances (Strengthen Copyright (Option 1));
- introduce a broad data mining exception without an “opt out” (Broad Exception (Option 2)); or
- introduce a broad data mining exception that allows rights holders to reserve their rights, underpinned by supporting measures on transparency (Broad Exception with “opt out” (Option 3)).
Based on what it perceives as a paucity of evidence and on international divergence, the Government has declined to endorse any of the options set out above, and its initially preferred option ahead of the consultation phase (Option 3) is no longer the preferred way forward. This means that there is now no current proposal for a specific copyright exception for AI training. The Report notes that strong opposition to Option 2 and Option 3 was received from the creative industries, and that AI developers expressed clear concerns that Option 0 and Option 1 could reduce the UK’s competitiveness in a global AI marketplace. The Report and Impact Assessment take the view that existing evidence is currently limited, with particular regard given to important developments in the wider market since the close of the consultation: the burgeoning body of case law arising from litigation (in the UK and globally), developing licensing markets and evolving technical standards. The Report also moots alternatives to the four original options, including: (i) regime‑focused or sector‑specific exceptions (e.g. research‑only); (ii) drawing distinctions between content used in generative and non‑generative AI; and (iii) statutory licensing (as is currently being proposed in India). While these alternatives are discussed, none is recommended at this stage and, as with the original options, each is noted as requiring substantial further evidence and coordination with relevant industry voices – including consideration of the HoL Report.
Transparency over the content and data used to develop AI systems
The Government agrees with the reported strong and wide stakeholder support for greater transparency, albeit without mandating any specific formalities or requirements. In keeping with the overall thrust of the Report, the Government will monitor: (i) US and EU regimes; (ii) industry practice; and (iii) worldwide legislative efforts, to inform any future potential legislation. The Government also proposes to work with industry and experts to develop best practice.
Labelling of AI and human-created content
The Report notes consensus that there should be some form of labelling of AI content, and the lack of any current UK law directly mandating it. However, a clear approach on how to support labelling in practice has not been determined. To address competing suggestions (e.g. labelling all AI content, or only wholly AI‑generated content), the Government proposes to work with industry and experts to develop best practice.
Technical tools and standards
The Report highlights the importance of technical tools and standards to mediate AI’s access to online content, reviewing the role of the robots.txt standard and other measures. However, it suggests that, while the marketplace for these measures is growing rapidly, they may not currently meet stakeholder needs, and there are challenges with adoption and compliance. There is also a lack of consensus on the need for Government intervention. Accordingly, the Government again proposes to work with industry and experts to develop best practice.
Licensing
The Report notes the individual and collective licensing markets developing in the UK and elsewhere. The new and evolving nature of these markets means that the Government does not propose to intervene in them at this stage. Instead it proposes to monitor developments in the UK and elsewhere. The Government also suggests a role for its Creative Content Exchange in testing commercial models for licensing of certain data sets.
Enforcement
The Government confirms that it will continue working collaboratively with partners from a range of sectors, as well as with law enforcement and the judiciary, to seek to ensure there are proportionate, effective and accessible routes of redress for infringement in the context of AI.
Computer-generated works
The Government continues to hold the view that, in the absence of stakeholder evidence of its ongoing value, the current UK copyright protection for computer-generated works (as defined in s178 CDPA) should be removed (see here for more information on computer-generated works and AI).
Digital replicas
The Report recognises the potential for harm where an individual’s voice or likeness is replicated without their consent. It proposes further consultation to "explore a range of options for addressing these risks" without restricting legitimate innovation. This expressly includes the potential for a specific new digital replica or personality right.
What’s next?
The Report and Impact Assessment are geared towards further evidence‑gathering rather than immediate reform. The Government intends to monitor active litigation in the UK courts as well as globally, given the UK’s place in worldwide content and development markets. There will also be continued monitoring of international regulation, technical standards and licensing markets, together with commissioned research and sector engagement to refine the Government’s understanding of the impact of AI across the UK economy. Legislative change to the CDPA remains possible, but any such change has been framed as contingent on clearer evidence and greater legal and commercial certainty, and is unlikely to be imminent given the greater emphasis on the potential for industry initiatives. It should be noted that, by publishing the Report and Impact Assessment, the Government has discharged its statutory obligations under the DUAA, and any further action on copyright and AI will be a matter of policy choice and stakeholder pressure rather than legislative requirement.
If you have questions on this article or on anything relating to AI and copyright, please contact the authors or your usual DLA Piper contact.