Innovation Law Insights

1 December 2025
Legal Break

Digital Omnibus package explained: What changes for companies under the new omnibus?

As part of our “Legal Break” video series, we discuss the potential impact of the draft Digital Omnibus package introduced by the European Commission. The package is intended to allow European companies to reduce administrative burdens and focus more on innovation and growth. Watch the episode here.

 

Data Protection and Cybersecurity

The Digital Omnibus package and its impact on the European data economy

The term Digital Omnibus refers to a package of measures aimed at simplifying and streamlining a significant part of the EU digital acquis. The initiative is part of a broader strategy to reduce administrative burdens. The objective is to strengthen the competitiveness of the Single Market and facilitate innovation, particularly in the areas of data, cybersecurity and AI.

The package has two main strands:

  • COM(2025) 837: a “horizontal” intervention, affecting a number of existing instruments (primarily the GDPR, ePrivacy, NIS2 and the Data Act);
  • COM(2025) 836: a “vertical” intervention on the AI Act, mainly aimed at making its application more gradual and flexible over time.

The architecture underpinning the Digital Omnibus therefore goes beyond mere technical adjustment: it profoundly reorganises the relationship between rules, economic actors and fundamental rights in the European digital space.

The measures described below are based on legislative proposals presented by the European Commission (COM(2025) 836 and COM(2025) 837), which are currently under examination by the European Parliament and the Council. The final text may therefore still be subject to significant changes prior to any eventual adoption.

The legal instruments concerned

The Digital Omnibus affects a significant set of legislative acts, including:

  • the GDPR (Regulation (EU) 2016/679);
  • the ePrivacy Directive (Directive 2002/58/EC);
  • the NIS2 Directive and other instruments relating to cybersecurity (DORA, CER, eIDAS);
  • the Data Act, which is strengthened through the incorporation of previous instruments such as the Data Governance Act and the Regulation on the Free Flow of Non-Personal Data.

The objective is to eliminate regulatory overlaps, reduce duplicated compliance obligations and provide greater legal certainty to economic operators, creating a more coherent and less fragmented framework.

The main changes in the field of privacy

GDPR

DEFINITION OF PERSONAL DATA

Current framework (Article 4(1) GDPR)

Article 4(1) GDPR defines personal data as any information relating to an identified or identifiable natural person. A person is considered identifiable if they can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or one or more factors specific to that person’s physical, physiological, genetic, mental, economic, cultural or social identity.

New framework proposed by the Digital Omnibus

  • The proposal clarifies that information doesn’t constitute personal data for a specific controller where that controller doesn’t have, and cannot reasonably obtain, the means to identify the data subject.
  • The assessment becomes more subjective and controller-centred, anchored in the concrete identification capabilities of the specific operator.

Implications

In contexts such as AI training, big data analytics and the sharing of industrial or IoT data, the scope of application of the GDPR could be narrowed, as it may be argued that the data at issue doesn’t qualify as personal data for certain actors.

SPECIAL CATEGORIES OF DATA

Current framework

Article 9(1) GDPR prohibits the processing of special categories of personal data (data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, as well as genetic data, biometric data processed to uniquely identify a person, health data and data concerning a person’s sex life or sexual orientation), unless one of the exceptions exhaustively listed in Article 9(2) applies. None of those exceptions expressly covers the detection or correction of bias in AI systems.

New framework proposed by the Digital Omnibus

The proposal introduces a new specific exception allowing the processing of special categories of data where it’s strictly necessary for detecting, preventing or correcting bias in AI systems, subject to the implementation of appropriate technical and organisational safeguards (including data minimisation, pseudonymisation, prohibition of reuse for other purposes and limited retention).

Implications

The introduction of an ad hoc legal basis for bias detection responds to a concrete need for auditing and non-discrimination in AI systems, while at the same time opening a sensitive space for processing highly protected data.

LEGITIMATE INTEREST FOR AI TRAINING

Current framework

Pursuant to Article 6(1)(f) GDPR, processing is lawful if it’s necessary for the legitimate interests pursued by the controller or by a third party, provided that the interests aren’t overridden by the interests or fundamental rights and freedoms of the data subject. This legal basis has historically been interpreted restrictively in cases involving large-scale profiling or data-intensive emerging technologies.

New framework proposed by the Digital Omnibus

The Digital Omnibus explicitly recognises that developing, training, testing and improving AI systems may constitute a legitimate interest of the controller, subject to a balancing test and the implementation of appropriate safeguards, and an effective right to object.

Implications

The explicit recognition of legitimate interest for AI training is a significant political and legal shift: an activity previously characterised by high uncertainty would now be “codified” as a potentially valid ground for processing under the GDPR.

ARTICLE 22 GDPR (AUTOMATED DECISION-MAKING)

Current framework

  • Article 22 gives the data subject the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them, subject to certain exceptions.
  • The concept of “solely automated decision” has generated significant interpretative uncertainty, in particular regarding the role and depth of human involvement.

New framework proposed by the Digital Omnibus

  • The proposal clarifies that a decision qualifies as “solely automated” only where human intervention isn’t capable of materially influencing the outcome of the decision-making process.
  • The proposal introduces a criterion of “meaningful human involvement”, aimed at distinguishing purely formal controls from genuine human assessment.

Implications

This clarification may reduce the risk that hybrid decision-making models (human + algorithm) automatically fall within the scope of Article 22, potentially facilitating the deployment of automated decision-making systems in high-volume contexts.

DATA BREACH NOTIFICATIONS

Current framework

  • In the event of a personal data breach, Article 33 GDPR requires notification to the supervisory authority within 72 hours, unless the breach is unlikely to result in a risk to the rights and freedoms of natural persons.
  • Communication to data subjects is required where the breach is likely to result in a high risk.

New framework proposed by the Digital Omnibus

  • Notification to the supervisory authority becomes mandatory only in cases of high risk, aligning the threshold with that applicable to communication to data subjects.
  • The deadline is extended to 96 hours and the notification must be submitted through a Single EU Entry Point (see Cybersecurity section).

Implications

This measure would significantly reduce the volume of notifications and the administrative burden for both organisations and supervisory authorities.

ePrivacy Directive

With regard to the ePrivacy Directive, the Digital Omnibus proposes to move beyond the traditional distinction between the sector-specific rules on electronic communications and the GDPR as far as the use of cookies and similar technologies is concerned.

The provisions in Article 5(3) of Directive 2002/58/EC would largely be absorbed into the GDPR through the introduction of specific rules on processing data generated via terminal equipment (cookies, SDKs, fingerprinting, device identifiers).

The objective is to bring processing within a single legal framework, based on the general principles of the GDPR, reducing regulatory fragmentation and the overlap of formal obligations. The proposal further promotes the use of machine-readable consent and objection signals integrated into browsers and operating systems, with the aim of rationalising consent-collection mechanisms and reducing the proliferation of banners, while safeguarding the confidentiality of electronic communications.
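
By way of illustration only, one existing machine-readable, browser-level objection signal is the Global Privacy Control header (Sec-GPC). The proposal doesn’t mandate any particular standard, so the sketch below is merely an assumption-laden example of how a web application might honour such a signal before setting non-essential cookies; the function name and consent-storage logic are hypothetical.

```python
# Illustrative sketch only: the Digital Omnibus proposal doesn't prescribe a
# specific signal or API. The Global Privacy Control header (Sec-GPC) is used
# here purely as an example of an existing machine-readable, browser-level signal.

def non_essential_cookies_allowed(headers: dict[str, str], stored_consent: bool | None) -> bool:
    """Decide whether non-essential cookies or trackers may be set for a request."""
    # Conservative simplification: an explicit browser-level objection signal
    # prevails, even where a consent record already exists for the user.
    if headers.get("Sec-GPC") == "1":
        return False
    # Otherwise fall back on the consent the user has (or hasn't) recorded.
    return bool(stored_consent)


# A browser sending "Sec-GPC: 1" is treated as objecting to non-essential trackers.
print(non_essential_cookies_allowed({"Sec-GPC": "1"}, stored_consent=True))  # False
print(non_essential_cookies_allowed({}, stored_consent=True))                # True
```

Treating the browser-level objection as prevailing even over a previously recorded consent is a deliberately cautious choice in this sketch; any real implementation would follow the precedence rules ultimately adopted in the final text.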

Implications in the field of cybersecurity

The Digital Omnibus doesn’t alter the substantive principles governing the protection of networks and information systems. Instead, it seeks to address one of the main operational concerns raised by both operators and authorities: the fragmentation and overlap of incident notification regimes.

The proposal provides for:

  • the establishment of a Single EU Entry Point, expected to be managed by ENISA, through which obligated entities would be able to submit a single notification valid for multiple regulatory frameworks;
  • the integration, in line with the “report once, share many” principle, of the reporting channels provided for under:
    • the NIS2 Directive;
    • the DORA Regulation;
    • the GDPR (in the event of a personal data breach);
    • the CER Directive on critical infrastructures;
    • the eIDAS Regulation;
  • the standardisation of the content and formats of notifications, with the aim of reducing legal uncertainty and facilitating risk assessment by the competent authorities;
  • strengthened cooperation between national and European authorities, through more fluid and interoperable information-sharing mechanisms.

From a practical perspective, this development is particularly relevant for:

  • large multinational groups subject to multiple sectoral obligations;
  • operators of essential services and digital service providers;
  • banks, insurance companies and other financial operators subject to DORA.

If adopted, the measure would result in a significant procedural simplification, while at the same time requiring organisations to revise their internal incident-response processes.

Implications for the Data Act

With regard to the Data Act, the Digital Omnibus strengthens its role as the cornerstone of the European data architecture. The proposal envisages the absorption into the Data Act of the Data Governance Act (Regulation (EU) 2022/868), Regulation (EU) 2018/1807 on the free flow of non-personal data, and the Open Data and Re-use of Public Sector Information Directive (EU) 2019/1024, effectively transforming it into a single, comprehensive reference framework for access to, sharing and re-use of both personal and non-personal data.

The proposal also introduces targeted adjustments to make cloud switching obligations more proportionate, in particular for SMEs and highly customised services, and to reinforce safeguards against unauthorised access by third countries.

Conclusion

The Digital Omnibus is a first, strong signal of a reorientation of EU digital policy towards simpler, competitiveness-driven models. Although it doesn’t yet have legally binding effects, organisations should closely monitor its evolution.

Author: Enila Elezi

 

Artificial Intelligence

Data provenance and defence technology: Information governance lessons from Slush 2025

The 2025 edition of Slush, held on 19-20 November in Helsinki, was more than a showcase for innovation: it served as a real barometer of the global technology landscape and a turning point for professionals working in information governance, cybersecurity and eDiscovery.

More than 13,000 founders, investors and operators – collectively responsible for trillions of dollars in assets – gathered at the Messukeskus Convention Centre, and it was clear that the technological paradigm had shifted. The once-distinct separation between the aggressive growth trajectory of startups and the more cautious, compliance-driven approach to risk management has dissolved. A new and complex risk environment has emerged, in which the pace of innovation directly challenges established foundations of data security and governance.

The most discussed theme, both on the official stages and in the more discreet meetings among investors, was the full maturity of generative AI. The visionary enthusiasm of previous years has given way to a colder pragmatism focused on practical implementation and trust. The key question is no longer whether to adopt AI, but how to integrate it reliably into highly regulated environments, at a time when consumer distrust of automated content continues to grow and companies still struggle to transform internal pilot projects into meaningful productivity gains. The burden on information governance professionals is increasing considerably: they have to ensure the provenance, traceability and integrity of data generated by systems that can, by their nature, produce opaque or difficult-to-verify outputs.

For legal and compliance teams, this creates an immediate operational need: embedding audit-trail mechanisms into corporate systems that clearly document which portions of a text or workflow were produced by an AI agent and which were validated or modified by a human. Failing to establish this architecture now means exposing the company to severe complications in future eDiscovery matters – especially when, not if, the first litigation involving algorithm-generated content arrives.
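
As a minimal sketch of what such an audit trail could capture, the snippet below defines a provenance record per content segment. The field names and structure are assumptions chosen for illustration; no regulation or standard prescribes them.

```python
# Illustrative sketch of an audit-trail record for AI-assisted content.
# Field names and structure are assumptions, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from hashlib import sha256


@dataclass
class ContentProvenanceRecord:
    document_id: str
    segment_id: str              # which portion of the text or workflow step
    produced_by: str             # e.g. "ai-agent:<model-name>" or "human:<user-id>"
    reviewed_by: str | None      # human validator, if any
    human_modified: bool         # whether the human changed the AI output
    content_hash: str            # integrity check on the final segment text
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def make_record(document_id: str, segment_id: str, text: str, produced_by: str,
                reviewed_by: str | None, human_modified: bool) -> ContentProvenanceRecord:
    """Build a provenance record for one segment of content."""
    return ContentProvenanceRecord(
        document_id=document_id,
        segment_id=segment_id,
        produced_by=produced_by,
        reviewed_by=reviewed_by,
        human_modified=human_modified,
        content_hash=sha256(text.encode("utf-8")).hexdigest(),
    )


# Example: an AI-drafted clause reviewed and amended by in-house counsel.
record = make_record("contract-0042", "clause-7", "Revised limitation of liability ...",
                     produced_by="ai-agent:drafting-assistant", reviewed_by="human:j.doe",
                     human_modified=True)
print(record)
```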

Security and hybrid conflict: The velocity trap of the digital ecosystem

While AI maturity captured much of the attention, the geopolitical backdrop was equally significant. The current global context has made it natural for Slush 2025 to place particular focus on defence technology and dual-use solutions. The priorities are clear: security-by-design isn’t an aspirational goal but a functional prerequisite.

The strength of the deep-tech startup landscape demonstrated that data protection can no longer rely solely on perimeter-based defences. The emphasis is shifting toward information resilience and the safeguarding of critical infrastructure. For eDiscovery professionals, the growing intersection between cyberattacks, infrastructure disruptions, and disinformation campaigns means that incident-response plans must evolve substantially. A cyberattack can no longer be viewed simply as a data breach; organisations have to prepare for scenarios in which operational information may be compromised, manipulated or weaponised as part of a hybrid campaign.

A practical consequence of this shift is the need for structured collaboration between eDiscovery teams and Security Operations Centres. Integrating threat intelligence and live attack indicators into data-preservation workflows can be decisive in enabling rapid, forensically sound recovery of information required for legal or regulatory processes.

Regulation as a design constraint

The broader European ecosystem introduced additional layers of complexity, largely due to the imminent full applicability of Regulation (EU) 2024/1689 (AI Act). Among investors and founders, a sharp divide emerged: some view the regulatory burden as excessive and detrimental to Europe’s competitiveness relative to the US and China, while others see regulation as a strategic opportunity to build globally exportable “trustworthy AI.” Regardless of the perspective, one truth remains: compliance cannot be treated as an add-on.

Startups developing high-risk AI systems have to consider the AI Act’s requirements – data quality, transparency and human oversight – as integral product features. This requires preparing detailed documentation on the provenance of training data, data-cleaning methodologies, bias-mitigation techniques and human-validation cycles. Only such rigour enables organisations to respond to regulatory inspections or judicial discovery.

For eDiscovery professionals, this means immediately updating data maps to include AI-model registries, training and validation datasets, training parameters and internal governance logs. These represent new categories of high-risk ESI that will inevitably play a central role in future disputes.
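
A minimal sketch of such an extended data-map entry follows; the asset categories and fields are hypothetical, chosen purely to illustrate how these new classes of ESI could be inventoried alongside traditional records.

```python
# Illustrative sketch of a data-map entry extended with AI-specific ESI categories.
# Category names and fields are assumptions for illustration, not a standard.
from dataclasses import dataclass
from enum import Enum


class AIAssetCategory(Enum):
    MODEL_REGISTRY = "model registry entry"
    TRAINING_DATASET = "training dataset"
    VALIDATION_DATASET = "validation dataset"
    TRAINING_PARAMETERS = "training parameters / configuration"
    GOVERNANCE_LOG = "internal governance log"


@dataclass
class DataMapEntry:
    asset_name: str
    category: AIAssetCategory
    system_of_record: str        # where the asset lives (repository, MLOps platform, DMS)
    owner: str                   # accountable business or technical owner
    retention_period: str        # applicable retention rule
    legal_hold: bool = False     # flagged when litigation or an inspection is anticipated


inventory = [
    DataMapEntry("credit-scoring-model v3", AIAssetCategory.MODEL_REGISTRY,
                 "internal model registry", "Data Science", "10 years"),
    DataMapEntry("credit-scoring training set 2024", AIAssetCategory.TRAINING_DATASET,
                 "data lake / curated zone", "Data Governance Office", "10 years"),
]

# Example: place every asset relating to a given model under legal hold.
for entry in inventory:
    if entry.asset_name.startswith("credit-scoring"):
        entry.legal_hold = True
print([(e.asset_name, e.legal_hold) for e in inventory])
```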

The human factor and the startup challenge

Beyond the major stage announcements, Slush also revealed a more human narrative characterised by operational constraints, investor selectivity and the need for new skill sets. A 2025 survey confirmed that most European founders consider fundraising and revenue growth their primary challenges, in a market where capital is becoming increasingly selective. Those who succeed do so by combining technical expertise with the ability to navigate increasingly complex legal and security considerations.

The belief that AI will replace knowledge-based professions is giving way to the understanding that AI will require highly skilled professionals. For those in information governance and eDiscovery, this means rethinking their role: not fearing the automation of repetitive tasks, but instead developing the expertise needed to manage, validate and contextualise intelligent systems. Auditing a model, assessing the quality of its outputs, and defining the parameters for ethical and defensible use are becoming strategic functions for the professionals of the future. Slush 2025 showcased an ecosystem evolving at remarkable speed, driven by bold innovation yet constrained by increasingly stringent demands for security, reliability and transparency.

Author: Dorina Simaku

 

Intellectual Property

UPC as a ‘common court’ among member states and in compliance with EU law: Court of Appeal decision

In its decision of 6 October, the Court of Appeal ruled on the appeal lodged by a major streaming company against the measures issued by the Munich Local Division in March last year, which had rejected the company’s preliminary objections.

The dispute originates from three infringement actions brought at first instance by two patent holders operating in the audiovisual sector. The defendant company raised several preliminary objections under Rule 19.1 of the Rules of Procedure, challenging the Munich Division’s territorial jurisdiction, the compatibility of the UPCA with EU law and the alleged breach of the right to be tried by a court established by law. The Court of First Instance declared admissible – though unfounded – only the objection concerning jurisdiction, holding that the grounds set out in Rule 19.1 were mandatory and precluded examination of preliminary objections not expressly provided therein. Before the Court of Appeal, the appellant contested the first-instance orders on this point, requesting their annulment or, alternatively, a preliminary referral to the Court of Justice so it could clarify the role of the UPC within the framework of EU law.

The first ground of appeal concerned the alleged incompatibility between the UPCA (Articles 31 and 32) and EU law (Articles 19 TEU and 267 TFEU), an issue that the Court of Appeal regarded as an essential prerequisite for the exercise of the UPC’s functions and therefore examined on the merits.

According to the appellant, the international jurisdiction of the UPC – based on Article 31 UPCA and on Articles 71a and 71b of the Brussels I bis Regulation – raised doubts as to its conformity with EU law.

Article 71a defines the UPC as a “common court” among member states and, as such, the Unified Court is treated under the Regulation as a national court. This classification is crucial, as it preserves the balance established by Articles 19 TEU and 267 TFEU between the Court of Justice – the guardian of the Treaties – and the national courts, which may make preliminary referrals to it.

According to the appellant, treating the UPC as a national court would be merely formal and wouldn’t overcome the limits imposed by EU law. Since the UPC was established by an international treaty – the UPCA – it would not constitute a body of a member state and would therefore undermine the institutional framework designed to ensure the uniform application of EU law.

Echoing the case law of the Court of Justice, the Court of Appeal nonetheless reaffirmed that the UPC is, for all intents and purposes, a “common court” of several member states within the meaning of Article 71a of the Brussels I bis Regulation. As such, the Unified Court is entitled to make preliminary referrals insofar as it maintains an effective connection with national legal systems and ensures cooperation with the Court of Justice. Finding that these conditions were fully satisfied in the present case, the Court of Appeal excluded the need for a preliminary referral as requested by the appellant.

The second ground of appeal concerned the alleged violation of the right to be tried by a court established by law, which had already been raised as a preliminary objection before the Munich Division. The Court of Appeal, confirming the interpretation of the first-instance judges, declared the objection inadmissible and unfounded. According to the appellant, the complaint was based on the alleged lack of jurisdiction of the UPC under EU law and on the replacement of the Central Division originally planned in London (under the UPCA) with the one in Milan. Having reaffirmed the UPC’s jurisdiction and legitimacy under EU law for the reasons already set out, the Court of Appeal also rejected the alleged breach of Article 47 of the Charter of Fundamental Rights and Article 6 of the ECHR, clarifying that the right to a tribunal established by law concerns the independence and impartiality of the judge, not their geographical location.

With this order, the Court of Appeal contributed to clarifying the nature of the UPC as a common court fully integrated into the EU legal order and consistent with the principles enshrined in the Treaties, reinforcing the stability of the European patent litigation system.

Author: Laura Gastaldi

Simple photographs: Simplification Bill extends exclusive rights to 70 years

On 20 November, the discussion of Bill C. 2655 began at the Chamber of Deputies. The bill, linked to the public finance manoeuvre, aims to simplify and digitise administrative procedures. Among the most notable innovations is Article 47, which addresses the protection regime for “simple” photographs.

What does the reform provide?

The proposal, already approved by the Senate, amends Article 92 of the Copyright Law (Law No. 633/1941), extending from 20 to 70 years the duration of exclusive rights in photographs that don’t meet the creativity threshold required to qualify as “photographic works”. This category includes “images of people or of aspects, elements, or facts of natural and social life, obtained through photographic processes or similar processes, including reproductions of works of visual art and frames from cinematographic films” (Art. 87, Law No. 633/1941).

Why is it an important change?

Until now, these photographs were protected for only 20 years from creation, which often discouraged investment in archives and collections of non-creative images, widely used in publishing, advertising and digital platforms. Extending protection to 70 years provides greater legal certainty and economic value for photographers, agencies and businesses operating in the sector.

The reform also addresses a historical disparity between “simple photographs” and “photographic works”: the latter, considered works of authorship, are already protected for 70 years after the author’s death and benefit from moral rights. Simple photographs remain excluded from moral rights, but their economic protection is strengthened.

The regulatory and digital context

The intervention is part of a broader package of measures for simplification and digitalisation. Updating copyright rules recognises the evolution of image production and circulation in an era dominated by social media, e-commerce and online content.

For professionals and companies operating in photography, media and advertising, the amendment creates longer-term monetisation opportunities and stronger protection against unauthorised use. However, questions remain regarding enforcement and coordination with other jurisdictions, especially in Europe, where regulations may differ.

Author: Noemi Canova


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Noemi Canova, Gabriele Cattaneo, Giovanni Chieco, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Giulio Napolitano, Andrea Pantaleo, Deborah Paracchini, Maria Vittoria Pessina, Tommaso Ricci, Marianna Riedo, Rebecca Rossi, Dorina Simaku, Roxana Smeria, Massimiliano Tiberio, Federico Toscani and Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out Diritto Intelligente, a monthly magazine dedicated to AI, here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
