
23 March 2026
Innovation Law Insights
Podcast
AI training and personal data: What the EU Digital Omnibus really changes
The EU Digital Omnibus Package might introduce one of the most important clarifications in European digital law: how personal data should be interpreted when training AI systems. Watch the video from Giulio Coraggio on the topic here.
Artificial Intelligence
EU council moves first: New Digital Omnibus draft on changes to the AI Act released
The EU Council has now approved its position on the Digital Omnibus package introducing changes to the AI Act. And the message is quite clear: even before the AI Act is fully applicable, it’s already being adjusted.
This isn’t unusual in EU law, but the timing is telling.
The AI Act was designed as a future-proof framework. But the Council’s latest drafts show that practical implementation challenges are already pushing policymakers to recalibrate key elements.
In other words, this isn’t just a simplification exercise – it’s an early correction of the regulatory trajectory.
A clear signal: New AI practices may become prohibited
One of the most striking elements in the Council-approved drafts is the introduction of a specific prohibition on AI systems used to generate non-consensual intimate content and child sexual abuse material.
From a legal standpoint, this is more than a clarification.
It shows that the category of “unacceptable risk” AI is evolving, and may continue to expand as new use cases emerge.
For companies working with generative AI, this raises a practical issue: compliance cannot be based only on today’s classification, but must anticipate how risk categories may shift over time.
More time to comply… but no less pressure
The Council also proposes delaying the application of rules for high-risk AI systems:
- 2 December 2027 for stand-alone systems
- 2 August 2028 for systems embedded in products
At first glance, this looks like good news for businesses.
But in reality, it’s more of a repricing of time than a relaxation of obligations.
The substance of the rules doesn’t change. What changes is the expectation that companies will use this additional time to build robust governance frameworks.
Those who interpret the delay as a reason to wait may find themselves underprepared when enforcement begins.
Transparency is back on the table
Another important element in the Council’s draft is the reinstatement of the obligation to register AI systems in the EU database for high-risk systems, even where providers believe their systems aren’t high-risk.
This is quite a shift.
It means that classification decisions are no longer purely internal. They may become visible, challengeable, and subject to scrutiny.
In practice, this could:
- push companies toward more cautious risk assessments;
- require stronger internal documentation; and
- expose borderline cases to regulators earlier than expected.
Sensitive data: A more restrictive approach returns
The Council drafts also bring back the “strict necessity” standard for processing special categories of personal data in AI systems.
This is particularly relevant for sectors such as healthcare, HR, and biometrics.
The implication is straightforward: using sensitive data in AI systems will require a much stronger justification, and less room for flexible interpretations.
It also reinforces something that many companies still underestimate: AI compliance and GDPR compliance aren’t separate exercises.
What this really means (beyond the headlines)
If you step back, the Council’s position reveals a broader trend.
Yes, timelines are being extended.
But at the same time:
- the list of prohibited practices is expanding;
- transparency obligations are increasing; and
- data protection constraints are being reinforced.
So the real message isn’t “relaxation” – it’s controlled tightening with more realistic timing.
The risk I see in practice
In many organisations, AI is already being rolled out across business functions, often faster than governance structures can keep up.
This is where the real risk sits.
If legal and compliance issues are identified only after AI systems are embedded into operations:
- fixing them becomes operationally complex;
- costs increase significantly; and
- reputational exposure can escalate quickly.
The Council’s draft, in my view, implicitly acknowledges this dynamic.
This isn’t just a draft – it’s a direction of travel
It would be easy to treat the Council-approved Digital Omnibus drafts as just another step in the legislative process.
That would be a mistake.
They’re an early indication of how the AI Act will evolve in practice:
- more adaptive
- more transparent
- more demanding on governance
For businesses, the key question is no longer whether the rules will change, but whether their internal structures are flexible enough to adapt when they do.
Author: Giulio Coraggio
Joint statement on AI-Generated imagery and the protection of privacy – potential impact
On 23 February 2026, the European Data Protection Board (EDPB), within an initiative coordinated by the Global Privacy Assembly (GPA) through its International Enforcement Cooperation Working Group (IEWG), issued the “Joint Statement on AI-Generated Imagery and the Protection of Privacy,” endorsed by 61 authorities worldwide.
While the statement doesn’t have binding legal force, it clearly signals the strong and converging intention of data protection authorities to promote a common and coordinated approach to the risks associated with generative AI systems, particularly where minors are involved.
Scope and content of the statement
The joint statement focuses primarily on AI systems capable of generating realistic images and videos.
Although these tools can serve a wide range of legitimate and innovative purposes and are increasingly valuable for businesses across multiple sectors, the authorities underline the tangible risk of misuse, including the generation of non-consensual intimate imagery, defamatory content, and other harmful representations.
These risks are especially significant where children or other vulnerable individuals are involved, with potential consequences such as cyberbullying, exploitation, and broader forms of abuse.
Core principles
Without prejudice to the specific legal requirements applicable across jurisdictions, the statement identifies four overarching principles that should guide the development and deployment of AI content generation systems:
- Safeguards: the implementation of robust safeguards to prevent the misuse of personal data and the creation of harmful or non-consensual content.
- Transparency: the assurance of meaningful transparency regarding system capabilities, safeguards, permitted uses, and the consequences of misuse.
- Redress mechanisms: the establishment of effective and accessible mechanisms enabling individuals to request the removal of harmful content and ensuring prompt responses.
- Child protection: the adoption of enhanced protections for children, including the provision of clear and age-appropriate information to minors, as well as to parents, guardians, and educators.
Conclusions
Although non-binding, the joint statement underscores the extent to which data protection authorities are treating the risks associated with generative AI as a regulatory priority. When applied in the specific context of children, the articulated principles point toward particularly high standards of safety, prevention and transparency.
In practical terms, this suggests that the level of expected compliance for companies developing generative AI systems with image generation capabilities is likely to increase significantly. While many of these requirements are already reflected in existing frameworks – most notably under the AI Act – the approach adopted by data protection authorities and other regulators may drive organisations towards even higher standards. In particular, increasing emphasis is likely to be placed on adopting concrete technical measures capable of effectively preventing the generation of unlawful or harmful imagery, including systems designed to detect, block, or otherwise inhibit the creation of content that infringes individual rights. At the same time, organisations will be expected to provide clear, accurate and accessible information to ensure the proper use of such systems, especially where minors are concerned.
In this respect, the statement can be read as an early indicator of the approach that’s likely to be adopted, whereby compliance may no longer depend solely on formal adherence to legal requirements, but also on the proactive implementation of by-design and by-default safeguards, to reach the highest standard of protection for children.
Finally, considering that many jurisdictions are increasingly adopting more stringent measures concerning minors – ranging from restrictions on access to certain online services to the implementation of technical age verification mechanisms – it will be important to monitor future legislative developments. These may include the introduction of additional regulatory provisions specifically aimed at further enhancing the level of protection afforded to children in the deployment and use of generative AI technologies.
Author: Federico Toscani
Blockchain and Cryptocurrency
Algorithmic trading: Supervisory clarifications on the interplay between MiFID II and the AI Act
On 26 February 2026, the European Securities and Markets Authority (ESMA) published a Supervisory Briefing on algorithmic trading (Briefing) in the EU, with the stated objective of fostering greater supervisory convergence in the application of the Directive 2014/65/EU (MiFID II) framework.
The Briefing is non-binding and isn’t subject to a comply-or-explain mechanism, but it’s clearly intended to guide both national competent authorities and market participants in the interpretation and supervision of the rules on algorithmic trading.
More specifically, the Briefing supports the application of Delegated Regulation (EU) 2017/589 (RTS 6), with particular attention to those areas where ESMA has identified divergent supervisory approaches or insufficiently robust market practice.
Substantively, the Briefing performs two functions.
- First, it clarifies the scope of algorithmic trading by refining the meaning of key concepts that are central to the MiFID II regime, including “algorithm,” “algorithmic trading,” and “algorithmic trading strategy.” In doing so, ESMA adopts a deliberately broad and functional reading of the concept of algorithmic trading, confirming that the decisive element isn’t the total absence of human intervention, but rather the fact that a computer algorithm determines one or more individual parameters of an order other than mere routing or post-trade processing.
- Secondly, the Briefing sets out concrete supervisory expectations on governance, testing, outsourcing, annual self-assessment, and pre-trade controls, drawing on the findings of the 2024 Common Supervisory Action on pre-trade controls launched following the 2022 Nordic flash crash.
A particularly significant aspect of the Briefing concerns the growing intersection between financial regulation and Regulation (EU) 2024/1689 (AI Act). The Briefing makes clear that the use of AI within trading systems increases the importance of governance, accountability, validation, explainability and internal oversight.
Algorithmic trading under MiFID II
A central element of the Briefing concerns the clarification of what constitutes algorithmic trading within the meaning of Article 4(1)(39) MiFID II. Under this provision, algorithmic trading refers to trading in financial instruments where a computer algorithm automatically determines individual parameters of orders. ESMA reiterates that the definition must be interpreted broadly.
Any trading activity in which an algorithm determines elements such as the timing, price, quantity or execution management of an order may fall within its scope, even where human intervention remains part of the trading process. Conversely, systems that merely provide informational signals to traders, without determining order parameters or executing trades automatically, don’t qualify as algorithmic trading, as already specified by ESMA in 2017 through Q&A 1603.
Building on this definition, the Briefing further distinguishes between the concepts of algorithm and algorithmic trading strategy, two notions that play a key role in the supervisory framework but have historically been interpreted inconsistently across jurisdictions.
- ESMA defines an algorithm as a computerized set of instructions capable of autonomously determining one or more parameters of a trading order.
- An algorithmic trading strategy, by contrast, is understood as a set of decisions implemented through one or more algorithms and designed to pursue a defined trading objective, such as market making, arbitrage or execution optimisation. This distinction is particularly relevant for supervisory purposes, as it determines how trading behaviour can be attributed to specific systems and therefore tested, monitored and subject to regulatory scrutiny.
ESMA approach on governance and testing of algorithmic trading
The Briefing also provides detailed supervisory guidance on the governance and testing of algorithmic trading systems. Under Article 17 MiFID II and the implementing provisions of RTS 6, investment firms engaging in algorithmic trading must establish robust governance arrangements ensuring that trading algorithms are properly designed, tested, monitored and controlled throughout their lifecycle.
ESMA emphasizes that these governance arrangements must clearly allocate responsibilities within the firm, ensure effective communication between trading, risk management, compliance and IT functions, and maintain an appropriate separation between trading desks and control functions. This governance framework is intended to ensure that algorithmic trading systems remain subject to effective internal oversight despite the increasing technological complexity of automated trading environments.
A key component of this framework concerns the testing of algorithms and algorithmic trading strategies.
- RTS 6 requires investment firms to conduct conformance testing, stress testing and scenario analysis before deploying algorithmic trading systems and whenever material changes occur. ESMA clarifies that a material change should be understood broadly as any modification capable of altering the behaviour, risk profile or compliance posture of an algorithmic trading system. Examples include changes to the decision logic governing price or quantity determination, modifications to execution behaviour, the extension of the algorithm to new instruments or trading venues, changes in risk control parameters, or alterations to external dependencies such as data feeds or trading infrastructure.
- The Briefing further reiterates that the use of outsourced software or third-party trading systems doesn’t alter the regulatory responsibility of the investment firm. Where algorithms or execution tools are procured from external providers, the investment firm remains fully responsible for compliance with MiFID II and RTS 6. Firms must therefore ensure that outsourcing arrangements provide sufficient transparency and control to enable them to understand, test and monitor the functioning of the algorithms they deploy, and to demonstrate compliance to the competent authority. In practice, this requires robust contractual arrangements granting access to relevant documentation, testing results and operational data relating to the outsourced systems.
Can AI-driven algorithmic trading systems be qualified as high-risk AI systems under the AI Act?
ESMA expressly notes that an algorithmic trading system may qualify as an AI system within the meaning of Article 3(1) of the AI Act. Where this is the case, the requirements introduced by that regulation must be integrated into the governance arrangements already required under RTS 6. In practical terms, this means that the organisational framework through which investment firms supervise their trading must also accommodate the regulatory obligations arising from the AI Act.
The AI Act adopts a risk-based regulatory taxonomy. At present, AI-driven algorithmic trading systems are not included among the high-risk use cases identified by the regulation. Nevertheless, the determination of what qualifies as a high-risk AI system remains subject to further regulatory clarification, and the European Commission is expected to issue additional guidance in this area. Pending such guidance, Annex III of the AI Act remains the most precise regulatory reference point for identifying high-risk AI systems, listing the sectors and use cases that are currently subject to the stricter compliance regime provided by the regulation.
The categories identified in that Annex primarily concern AI systems capable of directly affecting the fundamental rights or the access of natural persons to essential services – such as systems used for credit scoring, recruitment decisions, access to public benefits or law enforcement purposes. By contrast, algorithmic trading systems typically operate within financial markets and automate trading decisions between market participants, rather than producing decisions that directly determine the rights or opportunities of individuals.
Beyond these classification uncertainties, ESMA acknowledges that the growing integration of AI technologies into trading systems raises additional supervisory challenges. AI systems – particularly those relying on machine learning techniques – may evolve over time through recalibration or retraining processes, potentially altering the behaviour of a trading system without a single identifiable modification that would clearly qualify as a “material change” under RTS 6. This creates the risk that cumulative system (or model) adjustments may materially affect trading outcomes without triggering the testing procedures normally required for algorithmic systems.
ESMA emphasises that the existing MiFID II control framework already provides important supervisory tools to address the use of AI in trading. In particular:
- Article 9 of RTS 6 requires investment firms to conduct an annual self-assessment and validation of their algorithmic trading systems, including their governance arrangements and their overall compliance with Article 17 MiFID II.
- Article 2 of RTS 6 requires compliance staff to possess a sufficient understanding of the operation of the firm’s trading algorithms and to maintain ongoing interaction with personnel holding detailed technical knowledge of those systems. Where AI technologies are deployed, these requirements imply that firms must be able not only to control their trading algorithms, but also to explain how AI systems influence trading decisions and ensure that their use remains subject to effective governance, validation and internal oversight.
Digital omnibus: ECB draws supervisory lines
A further regulatory element relevant to the intersection between financial supervision and the AI Act emerges from the European Central Bank (ECB) Opinion of 13 March 2026 on a proposal for a regulation as regards the simplification of the implementation of harmonized rules on artificial intelligence (Digital Omnibus on AI). In that Opinion, the ECB stresses that the AI Act shouldn’t alter the allocation of supervisory competences in the financial sector.
- In particular, the ECB clarifies that conformity assessments, breach investigations and enforcement of the AI Act fall within the remit of national market surveillance authorities, while the ECB’s mandate remains limited to the prudential supervision of credit institutions under the Single Supervisory Mechanism.
- At the same time, the ECB highlights the need for a clear legal basis allowing reciprocal information exchange between prudential supervisors and market surveillance authorities, to avoid duplicative investigations and inconsistent supervisory outcomes where credit institutions deploy AI systems.
- Finally, the ECB also calls for greater proportionality in the classification of AI use cases in finance, suggesting that generalized linear models, such as linear and logistic regression models commonly used in credit risk assessment, shouldn’t be treated as high-risk AI systems when deployed as standalone statistical techniques, given their inherent transparency and explainability.
Taken together, these clarifications underline a broader institutional objective consisting in ensuring that the AI Act operates in a manner that complements, rather than disrupts, the established supervisory architecture of EU financial regulation, preserving both regulatory coherence and proportionality in the oversight of AI-driven financial activities. Ultimately, as AI systems progressively blur the line between static algorithms and adaptive decision-making models, one question remains unavoidable. Is the existing MiFID II supervisory framework, as it is, sufficiently equipped to capture the risks of self-evolving trading systems?
Authors: Andrea Pantaleo and Giulio Napolitano
Technology Media and Telecommunication
BEREC launches a call for inputs on the Outline Work Programme 2027
The BEREC (Body of European Regulators for Electronic Communications) recently launched an early call for inputs in preparation for the adoption of the BEREC Work Programme for 2027.
The aim of the consultation is to gather stakeholder feedback on the Outline BEREC Work Programme 2027, the draft Work Programme that, as every year, BEREC must publish by 31 January of the year preceding the one to which it refers.
The Outline BEREC Work Programme for 2027 is based on the BEREC Strategy 2026-2030, which identifies the high-level strategic priorities intended to guide BEREC's activities, and on the goals set out in Art. 3(2) of Directive 2018/1972 (European Electronic Communications Code – EECC), namely:
- promoting connectivity and access to very high-capacity networks (VHCN);
- promoting competition and efficient investment;
- contributing to the development of the internal market; and
- promoting the interests of the citizens of the EU.
The draft published by BEREC consists of various sections, including the high-level strategic priorities, cooperation with EU institutions and institutional groups, BEREC’s tasks under the EU legislation, projects brought forward by BEREC in 2026, and potential work for 2027 and beyond.
The draft submitted for public consultation first outlines the high-level strategic priorities, which had already been considered in previous work programmes and have been updated for 2027, namely:
- Promoting full connectivity and the digital single market, particularly by improving the conditions for the expansion and take-up of secure, resilient, competitive, and reliable very high-capacity networks (both land and undersea, fixed and wireless) across Europe.
- Supporting open and competition-driven digital ecosystems, in particular by fostering innovation, investment and user protection, as well as continuing to contribute to the implementation of the Digital Markets Act (EU Regulation 2022/1925 – DMA), by cooperating with the European Commission on the implementation of interoperability obligations for number-independent interpersonal communications services provided by gatekeepers.
- Empowering end-users in the context of a fast-evolving digital ecosystem, with particular attention to reducing the digital divide related to emerging technologies and to the implementation of the Open Internet Regulation (Regulation (EU) 2015/2120).
- Contributing to the development of sustainable, secure and resilient digital infrastructures, ensuring the continuity of communication services through coordinated strategic, operational and technical measures.
- Strengthening BEREC's capabilities and continuous improvement, by promoting greater efficiency, high-quality deliverables, transparency and environmental sustainability, simplifying bureaucracy and promoting the harmonisation of data collection across the EU to minimize the administrative burden and strengthen the internal market.
BEREC then goes on to describe its institutional and international cooperation activities, highlighting the importance of dialogue with European institutions, competent authorities in third countries and international organisations, which is essential to ensure the consistent and effective implementation of EU rules in the electronic communications sector.
In the document under public consultation, BEREC also provides an overview of its tasks under European legislation. In this context, BEREC continuously engages in activities related to the implementation of: (i) the provisions set forth by the EECC, (ii) the Open Internet Regulation (EU Regulation 2015/2120) and BEREC’s Open Internet Guidelines, (iii) the Roaming Regulation (EU Regulation 2022/612) and intra-EU electronic communications, and (iv) the provisions set forth by the DMA.
BEREC provides information regarding its future commitments with particular regard to the continuation of projects initiated in previous years and included in the BEREC Work Programme 2026 (such as the adoption of the “BEREC Report on the application of fair and reasonable pricing within the SMP framework,” scheduled for the second quarter of 2027, and the adoption, also expected in 2027, of the “BEREC Report on access conditions to state-aid funded networks”) and new activities planned for 2027 (such as monitoring issues related to IP interconnection and further analysis of the regulation of access to physical infrastructures).
Following the early call for inputs, BEREC will publish a draft of the 2027 Work Programme, which is expected to be subject to a public consultation in October 2026. BEREC plans to adopt the final 2027 Work Programme in December 2026.
Interested parties wishing to participate in the early call for inputs can submit their contributions by 15 April 2026.
Authors: Massimo D'Andrea, Matilde Losa, Arianna Porretti
Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Josaphat Manzoni, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Giulio Napolitano, Andrea Pantaleo, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.
Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.
For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.
Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.
You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and read Diritto Intelligente, a monthly magazine dedicated to AI, here.
If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.