
Innovation Law Insights

26 June 2025
Artificial Intelligence

AI Act Compliance deadline approaching: Are you ready?

In less than 50 days, relevant provisions of the EU AI Act will become applicable, making it even more important for businesses to ensure they comply when using AI systems.

Starting on 2 August 2025, a critical group of obligations under the AI Act will become legally binding. These include:

  • the designation of notifying and notified authorities for high-risk AI systems
  • requirements for General Purpose AI (GPAI) providers
  • foundational AI governance rules
  • penalties (with the exception of fines for GPAI model providers)
  • confidentiality obligations in the post-market monitoring context

These measures impact not only AI developers and providers, but also deployers, meaning any company that integrates or uses AI systems – especially generative AI – in their operations. With these obligations fast approaching, organisations must urgently ensure that their AI Act compliance strategy is fully implemented.

Why the AI Act changes everything

The AI Act represents a shift from soft law to enforceable obligations. It introduces a clear distinction between prohibited, high-risk, and general-purpose AI systems, each carrying specific duties. This structured approach requires companies to assess not only the AI tools they build, but also those they purchase, license or simply integrate.

Regulators will now also have the ability to audit companies’ AI practices, impose corrective measures, and initiate investigations into systems that pose significant risk to fundamental rights or safety.

A structured approach to AI governance

To meet the demands of AI Act compliance, businesses need a robust governance model. It must align legal, technical and operational stakeholders around a unified AI strategy.

  1. Strategic oversight from the top

Governance begins with leadership. Executive teams should define how the organisation uses AI, with guiding principles like trust, transparency and risk minimization. These principles must then be translated into detailed internal policies and protocols by legal, risk and compliance departments.

This top-down approach ensures that AI is aligned with the company’s values while remaining adaptable to ongoing regulatory change.

  2. Internal governance committees

Rather than relying on a single AI officer, companies are increasingly appointing multidisciplinary AI committees. These typically include representatives from legal, compliance, cybersecurity, data governance and IT.

Their role is to evaluate risk, oversee internal use cases, and manage relationships with external AI vendors. These committees serve as the operational heart of AI Act compliance, making sure each deployment meets both legal and ethical standards.

  3. Mapping and classifying AI use cases

A surprising number of tools in everyday use – from automated HR platforms to creditworthiness scoring systems – qualify as AI under the Act’s broad definition. Many of these may fall under the “high-risk” category without the organisation realizing it.

The first step toward compliance is identifying every AI system in use and classifying it correctly. Failing to do so could result in deploying systems that trigger obligations the company is unaware of – particularly problematic in regulated industries or cross-border operations.
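To make the mapping exercise concrete, an inventory can be kept as simple structured data. The sketch below is purely illustrative – the field names, categories and example entries are assumptions, not a format prescribed by the Act:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    GENERAL_PURPOSE = "general-purpose"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    source: str            # built in-house, licensed, or merely integrated
    purpose: str           # what the system is actually used for
    risk_class: RiskClass  # provisional classification under the Act

# Everyday tools often qualify without the organisation realising it:
register = [
    AISystemRecord("HR screening platform", "licensed", "automated shortlisting", RiskClass.HIGH_RISK),
    AISystemRecord("credit scoring model", "in-house", "creditworthiness assessment", RiskClass.HIGH_RISK),
    AISystemRecord("marketing copy assistant", "integrated", "text generation", RiskClass.GENERAL_PURPOSE),
]

# Surface every system whose classification triggers specific AI Act duties
for record in register:
    if record.risk_class in (RiskClass.HIGH_RISK, RiskClass.PROHIBITED):
        print(f"{record.name}: high-risk obligations apply before deployment")
```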

  4. AI Act compliance policies

To avoid regulatory risk, every organisation must develop an internal AI policy, verify that it complies with the Act, and ensure that all internal and external processes align with its compliance strategy. A business that doesn't operationalise compliance across departments risks falling short of the Act even when it believes it fully complies.

  5. Risk management and controls

Once AI systems are mapped, they must be paired with controls that mitigate their associated risks. This includes:

  • human oversight
  • explainability mechanisms
  • bias detection
  • resilience and security protocols
  • vendor accountability through contracts

For high-risk AI systems, these controls are no longer optional: they're legal requirements under the AI Act.
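A hedged sketch of how such a controls mapping might be kept machine-readable follows; the control names mirror the list above, while the structure itself is an assumption for illustration:

```python
# Illustrative only: the control names mirror the list above; the mapping
# structure is an assumption, not text from the Act.
REQUIRED_CONTROLS = {
    "high-risk": [
        "human oversight",
        "explainability mechanisms",
        "bias detection",
        "resilience and security protocols",
        "vendor accountability through contracts",
    ],
}

def missing_controls(risk_class: str, implemented: set) -> list:
    """List the controls not yet in place for a system of this risk class."""
    return [c for c in REQUIRED_CONTROLS.get(risk_class, []) if c not in implemented]

# A high-risk system with only two controls in place still has gaps:
print(missing_controls("high-risk", {"human oversight", "bias detection"}))
```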

  6. Continuous monitoring

AI isn’t static, and neither are the risks it introduces. Companies need to maintain a dynamic governance model, which includes:

  • revalidating systems when software or data changes;
  • tracking updates in the legal framework; and
  • revisiting risk assessments periodically.

This process must be documented and auditable to meet the post-market monitoring and accountability expectations of regulators.
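As a minimal sketch of what such a dynamic governance model might look like in practice (the triggers mirror the list above; the 180-day review interval is an invented assumption, not a requirement of the Act):

```python
from datetime import date, timedelta

def needs_revalidation(last_review: date,
                       software_changed: bool,
                       data_changed: bool,
                       legal_update: bool,
                       review_interval_days: int = 180) -> bool:
    """True if any of the governance triggers listed above has fired.

    The 180-day default periodic interval is an illustrative assumption.
    """
    periodic_due = date.today() - last_review > timedelta(days=review_interval_days)
    return software_changed or data_changed or legal_update or periodic_due

# A system last reviewed many months ago is due even with no other changes:
print(needs_revalidation(date(2025, 1, 10), False, False, False))
```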

The strategic value of compliance

Organisations that treat AI Act compliance as a check-box exercise risk falling behind. Those that build governance into their operations can gain long-term advantages – reducing exposure, improving internal efficiency, and increasing stakeholder trust.

With the EU AI Act expected to set the standard for AI regulation globally, companies that achieve compliance now will be better equipped to navigate upcoming frameworks in other jurisdictions.

With the countdown underway, the question isn’t whether the EU AI Act applies to your business – it likely does – but whether your business is prepared to meet its obligations.

Read more on this topic in the latest issues of our AI law journal available here.

Author: Giulio Coraggio

 

Data Protection and Cybersecurity

Protecting children in the digital era: Key EU and Italian legal developments on online safety for minors

The increasing presence of minors in digital spaces poses significant challenges, exposing children to harmful content, privacy breaches, and manipulative design practices. Protecting minors online requires a balanced approach that ensures their safety, privacy and well-being by conducting risk assessments on online platforms, implementing age verification mechanisms, and adopting default privacy settings tailored to minors.

The European Commission has taken a proactive step by proposing draft guidelines on the protection of minors online pursuant to Article 28 of Regulation (EU) 2022/2065 (the Digital Services Act or DSA). The guidelines aim to support online platforms accessible to minors in fulfilling their obligation to guarantee a high level of privacy, safety, and security. They were open for public consultation until 15 June 2025, are expected to be finalized by summer 2025, and will complement the existing measures implemented at EU and national level.

The current EU legal framework

At the EU level, significant efforts to protect minors online have been undertaken through data protection regulation. Regulation (EU) 2016/679 (the General Data Protection Regulation or GDPR) contains child-specific provisions recognising children as vulnerable individuals who require enhanced protection, particularly regarding the lawful processing of their personal data. Article 8 GDPR sets the age threshold for valid consent to data processing in information society services at 16 by default, allowing member states to lower it to no less than 13 years. Consent for children below this age must be obtained from a parent or guardian, with organisations expected to implement appropriate parental consent verification mechanisms. During its February 2025 plenary meeting, the European Data Protection Board (EDPB) adopted a statement on age assurance, noting that the measure is essential to ensure that children don't access content that isn't appropriate for their age. At the same time, the EDPB acknowledged that the method used to verify age must be the least intrusive possible and that children's personal data must be protected.

To complete this framework with a focus on online services, the DSA introduces specific provisions aimed at protecting minors online, but – unlike the GDPR – it doesn’t indicate a specific age threshold for protection. Article 28 of the DSA explicitly states that “providers of online platforms accessible to minors shall put in place appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service.”

Article 28 of the DSA doesn't prescribe specific measures for age verification. But its last paragraph provides that the Commission can issue guidelines to help providers of online platforms apply these measures. In this context, the Commission opened a public consultation on the guidelines, which closed on 15 June.

As part of the consultation, on 4 June 2025, the European Commission held a workshop with 150 experts from industry, academia, civil society, regulators, and young people to gather feedback on the guidelines.

The final version of the guidelines is expected to be adopted in summer 2025. In the meantime, the EU Commission is developing a harmonised, privacy-preserving EU age verification solution. The first version of the technical specifications and the beta version are already available on GitHub. This system is intended to provide a temporary solution pending the entry into force of the EU Digital Identity Wallet, which is expected by the end of 2026, in accordance with Regulation (EU) 2024/1183.

The content of the DSA guidelines

The guidelines aim to support providers of online platforms in addressing risks for minors by providing a set of measures that the Commission considers will help providers to ensure a high level of privacy, safety and security on their platforms.

The EU Commission notes that a provider that simply declares in its terms and conditions that its online platform isn't accessible to minors, but puts no effective measure in place to prevent minors from accessing the service, cannot claim that the platform falls outside the scope of Article 28 of the DSA. It still has to implement effective measures to protect minors from harmful content.

At the same time, according to the current draft version of the guidelines, the EU Commission considers that self-declaration (ie “relying on the individual to supply their age or confirm their age range, either by voluntarily providing their date of birth or age, or by declaring themselves to be above a certain age, typically by clicking on a button online”) doesn’t meet the requirements for robustness and accuracy. So the Commission doesn’t consider self-declaration to be an appropriate age assurance method to ensure a high level of privacy, safety, and security of minors in accordance with Article 28.

Appropriate measures recommended by the Commission include age estimation (meaning “independent methods which allow a provider to establish that a user is likely to be of a certain age, to fall within a certain age range, or to be over or under a certain age”) and age verification (meaning “a system that relies on physical identifiers or verified sources of identification that provide a high degree of certainty in determining the age of a user”).

With regard to content moderation, the Commission considers that providers of online platforms accessible to minors should establish moderation policies and procedures that set out how content and behaviour harmful to the privacy, safety and security of minors is detected and moderated, and should ensure that these policies and procedures are enforced in practice.

Lastly, the Commission calls for the implementation of child-friendly reporting, feedback and complaints mechanisms that allow minors to report content, activities, individuals, accounts, or groups they believe may violate the platform’s terms and conditions. This includes any content, user or activity that is considered by the platform to be harmful to minors’ privacy, safety, and/or security.

The Italian framework

The Italian legal framework for the online protection of minors includes a combination of privacy safeguards, sector initiatives, and recent legislative measures aimed at enhancing digital safety for children. Under the Italian Privacy Code, as adapted to the GDPR by Legislative Decree No. 101/2018, minors are recognised as a vulnerable category requiring special protection in the processing of their personal data.

For instance, Article 2-quinquies of the Privacy Code explicitly addresses the rights of minors, reinforcing the GDPR’s provisions on parental consent for data processing of children under 14 years and emphasizing transparency and fairness in communications directed at minors. The Italian Data Protection Authority (Garante per la Protezione dei Dati Personali) has adopted a firm stance on safeguarding minors’ privacy, notably by requiring several providers of online platforms and services to adopt age verification tools suitable for preventing access by minors without the express consent of those exercising parental responsibility over them.

Decree-Law No. 123 of 15 September 2023 (Decreto Caivano) establishes urgent measures to enhance minors' protection in the digital sphere. Among its key provisions, Article 13-bis mandates that website operators and providers of video sharing platforms that disseminate pornographic images and videos in Italy must verify users' age to prevent minors under the age of 18 from accessing pornographic content.

To implement Article 13-bis of Decreto Caivano, on 18 April 2025, the Italian Communications Regulatory Authority (AGCOM) approved Decision 96/25/CONS, which sets out the procedures that video sharing platforms and websites that make pornographic content available in Italy must use to verify the age of their users.

The age verification system relies on certified independent third parties to provide proof of age and follows a two-step process – user identification and authentication – for each session of use of the regulated service (eg the provision of pornographic content via a website or web platform).
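A simplified sketch of that two-step flow is set out below. Every function name and the proof format are hypothetical; the point is the separation of roles, with the certified third party performing identification and the platform merely authenticating the resulting proof of age:

```python
# Hypothetical sketch of the two-step, per-session flow described above.
# All names and the proof format are assumptions: the certified third party
# identifies the user and issues a proof of age; the platform only checks
# that proof during authentication and never receives identity data.

def third_party_identification(credentials: dict) -> dict:
    """Step 1 (certified independent third party): verify the user and
    return a minimal proof of age, without identity details."""
    # ... real identification against verified sources would happen here ...
    return {"over_18": True, "issuer": "certified-age-provider"}

def platform_authentication(age_proof: dict) -> bool:
    """Step 2 (platform): grant this session only if the proof checks out."""
    return age_proof.get("issuer") == "certified-age-provider" and age_proof.get("over_18", False)

proof = third_party_identification({"document": "..."})
print(platform_authentication(proof))  # access decision for this session only
```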

Platforms and websites must comply with the relevant provisions within six months of the publication of the decision. AGCOM is empowered to enforce compliance, ordering providers to cease violations and taking all necessary measures to block websites or platforms found in violation of the requirements.

Conclusion

The protection of minors online is a priority at both the EU and Italian levels, grounded in privacy by design, risk-based safeguards, and the best interests of the child.

The EU has established a robust regulatory framework through the GDPR’s child-specific provisions, the Digital Services Act’s targeted obligations for online platforms, and forthcoming Commission guidelines. Italy complements the EU framework with recent legislative and regulatory interventions such as the Decreto Caivano and AGCOM’s enforceable age verification requirements. Compliance with these evolving rules is essential for providers of online services to avoid significant penalties and reputational damage, especially given the increasing scrutiny on digital platforms regarding minors’ safety.

Our team at DLA Piper has extensive experience in providing tailored advice and practical support to online service providers seeking to navigate this complex landscape and ensure full compliance with applicable obligations.

Author: Marianna Riedo

 

Technology

NFTs and the Insurance sector: Emerging risks and legal solutions

In the context of the growing digitalisation of the economy and culture, digital artworks issued in the form of Non-Fungible Tokens (NFTs) raise novel and complex issues for insurance law.

The absence of a defined regulatory framework, the legal uncertainty surrounding the nature of tokens, and the risks related to market volatility and cybersecurity all call for an urgent adaptation of traditional protection instruments – particularly in light of the transformations introduced by blockchain technology.

The legal nature of NFTs and definitional challenges

NFTs function as digital “certificates of ownership” over digital works, registered on a blockchain via smart contracts. They serve as certificates of authenticity and title for native digital artworks, conferring uniqueness and traceability. But despite the widespread proliferation of crypto art and its impact on the current economic and legal landscape, there’s still no uniform definition of the legal nature of NFTs – an essential element for determining insurance risks and premiums.

A preliminary distinction must be drawn between:

  • the token itself, understood as a unique, non-fungible digital entity; and
  • the digital artwork identified by the token, which may also be subject to insurance coverage.

The Italian Ministry of Culture has attempted to frame NFTs as instruments for the circulation of digital copies of artistic works, without, however, providing them with a definitive legal qualification. At the European level, Regulation (EU) 2023/1114 on Markets in Crypto-Assets (MiCA) expressly excludes NFTs from its scope of application. Similarly, neither French nor US law offers comprehensive legal definitions.

Noteworthy, however, is the English High Court's ruling in Osbourne v OpenSea ([2022] EWHC 1021 (Comm)), which recognised NFTs as proprietary assets entitled to legal protection. Likewise, the Hangzhou Internet Court has classified NFTs as virtual property.

The challenges of insurability of digital art

The transition of art into the digital realm requires a fundamental rethinking of the concept of insurability. Traditional fine art policies are ill-suited to cover the new and distinctive risks associated with NFTs. The most pressing challenges include:

  • the valuation of NFTs, which is subject to extreme fluctuations and the absence of a stable reference market;
  • the identification of the insurable risk, often intangible or technological in nature and not easily classifiable within conventional frameworks;
  • difficulties in verifying token ownership, often obscured by pseudonymisation and decentralised technologies.

Additionally, there are no standardised methods for assessing the value and risk associated with these assets, making the calculation of insurance premiums a highly complex and variable exercise.

Emerging risks: Between market volatility and cyber threats

NFT-based art is exposed to a wide range of novel risks, including:

  • the sudden depreciation of tokens;
  • the potential for plagiarism and counterfeiting of the underlying digital artwork;
  • the illicit use of NFTs for money laundering and fraud, exacerbated by the anonymity of transactions and their cross-border nature; and
  • cyberattacks targeting digital wallets and trading platforms.

In particular, the theft of the cryptographic private keys to a wallet results in the irretrievable loss of the asset. This risk is significantly higher with so-called hot wallets, which are constantly connected to the internet, than with cold wallets. A notable case is that of Coincheck Inc., a major Japanese cryptocurrency exchange that suffered a cyberattack in which USD534 million in crypto assets was stolen – surpassing even the infamous Mt. Gox breach of 2014. The theft occurred through a hot wallet connected to the network.

The cyber risk insurance perspective

Due to the inherently digital and decentralised nature of NFT artworks, cyber risk insurance policies appear to offer a more suitable form of coverage than traditional art insurance. These policies typically cover damages arising from data breaches and cyberattacks, as well as liability and legal expenses. Though originally designed for other purposes, cyber insurance can be adapted to cover:

  • damage to digital infrastructure or the blockchain;
  • theft of tokens or private keys;
  • intellectual property rights infringements;
  • reputational damage from the circulation of counterfeit NFTs; and
  • technical assistance and disaster recovery expenses.

But there are still conceptual challenges regarding whether a work of art can be qualified as “data” protected under a cyber risk framework rather than merely as an insurable asset.

Operational experiences and emerging coverage models

Some insurers and insurtech firms have already developed innovative solutions tailored to the protection of NFTs. Notable examples include Coincover, which offers token recovery and private key theft protection; Nexus Mutual, which provides mutualised coverage against cyberattacks on smart contracts; and YAS Digital Limited, which launched NFTY, the world’s first NFT microinsurance, developed in collaboration with Assicurazioni Generali S.p.A. – Hong Kong.

Other insurers, such as OneDegree, AXA, and MunichRe, are exploring hybrid policies that combine fine art and cyber risk coverage. Additionally, platforms specialising in blockchain-based art certification, such as 4ART Technologies AG and Art Rights, are offering tracking and identification services with potential applications in the insurance context.

Conclusion

The rise of crypto art as both a form of artistic expression and an investment asset calls for a multidisciplinary legal and insurance response. While the volatility of the NFT market and the recent downturn in sales raise doubts about the long-term sustainability of the phenomenon, the digitisation of cultural and artistic heritage is undeniably on the rise. This trend demands the development of regulatory and insurance instruments that are flexible, technologically advanced, and firmly rooted in legal principles.

Author: Dorina Simaku

 

Intellectual Property

Generative AI and Copyright: The first preliminary reference to the CJEU

In the evolving landscape of AI and copyright law, the Court of Justice of the European Union (CJEU) has received its first-ever preliminary reference addressing the copyright-related legal implications of generative AI. The case, Like Company v Google Ireland (C-250/25), originates from Hungary and raises significant questions about the legal classification of AI-generated responses under EU copyright rules.

Background of the copyright-related case on generative AI before the CJEU

The dispute stems from an article published on the Hungarian website balatonkornyeke.hu, run by Like Company, reporting on Hungarian singer Kozso and his alleged plan to release dolphins into Lake Balaton, alongside other personal details. The 579-word article was unsigned and compiled information already available from various online sources and newspapers, including a photo taken from Facebook.

According to Like Company, Google’s Gemini chatbot was prompted as follows:
“Can you provide a Hungarian-language summary of the article published on balatonkornyeke.hu regarding Kozso’s plan to release dolphins into the lake?”

Like Company claims the chatbot generated a detailed summary of the original content, infringing its copyright and amounting to an unauthorized “communication to the public” of its press publication. The publisher invoked Article 15 of the Digital Single Market (DSM) Directive, which grants press publishers exclusive rights over the digital use of their publications.

Google denied any infringement, arguing that:

  • no reproduction of the content had taken place in Hungary, and Hungarian copyright law didn’t apply; and
  • the summary didn’t constitute reproduction nor public communication under EU law, since it didn’t reach a “new public” – the original article was already freely available online – and the summary merely presented factual information, not verbatim content.

Google further argued that, even if the chatbot's output were deemed a reproduction, it would fall under copyright exceptions provided by EU law, including:

  • the temporary copies exception under Article 5(1) of the InfoSoc Directive; and
  • the text and data mining (TDM) exception under Article 4 of the DSM Directive, which permits automated extraction of lawfully accessible content unless rights holders have expressly opted out.

The preliminary reference of the AI case to the CJEU

The Budapest Környéki Törvényszék (Budapest Environs Regional Court) referred the following questions to the CJEU:

  • Can a chatbot's response that reproduces identical content from a protected press publication be considered a reproduction or a communication to the public – and is it relevant that the chatbot's response results from a predictive process based on previously observed patterns?
  • Does training an AI model trigger the reproduction right?
  • If so, does the TDM exception under Article 4 of the DSM Directive apply?

These questions strike at the core of some of the most debated issues in the intersection of copyright and AI.

Key considerations

Two preliminary remarks are necessary:

First, the way the Hungarian court framed the reference implies that the content on the publisher’s website is already deemed protected by copyright. If that isn’t the case, it must be recalled that neither copyright nor related press publisher rights extend to “mere facts” – a particularly thorny issue in the context of journalistic content.

Second, both copyright and related rights allow for independent creation. Infringement requires showing derivation. In the context of large language models (LLMs), this is not always straightforward, as the training data is often opaque. This lack of transparency is precisely why rights holders are pushing for stricter transparency obligations for AI developers.

If the content is indeed protected and derivation is proven, then under EU law, a chatbot’s activity may amount to a violation of reproduction and public communication rights. The CJEU has consistently emphasized a high level of protection for intellectual property, as enshrined in Article 17(2) of the EU Charter of Fundamental Rights and reflected in both the InfoSoc and DSM Directives.

As to whether the “predictive nature” of AI outputs might shield against infringement, the answer seems negative if the prediction results in the reproduction of protected third-party content. That said, not all matches between output and protected works are necessarily relevant – context matters.

AI training and the reproduction right

The second issue concerns whether AI training constitutes reproduction. Within the EU legal framework, the answer appears settled: yes.

EU law provides specific exceptions authorising, under strict conditions, text and data mining, even for content protected by copyright or related rights, including database rights. The CJEU has also consistently interpreted the reproduction right broadly, in line with Article 2 of the InfoSoc Directive.

Denying that TDM (a component of AI training) triggers the reproduction right would undermine the rationale behind the TDM exceptions provided by EU legislators.

Does the TDM exception apply to AI training?

Finally, the question remains whether the TDM exception applies to AI model training. While some commentators initially expressed doubts, the issue now seems largely resolved. AI was already under consideration when the DSM Directive was adopted in 2019, and the new AI Act explicitly links TDM to AI training.

But it's worth emphasizing that TDM is only one part of the training process. There is no explicit EU exception covering AI training as such. The current framework only allows exceptions for specific acts (like reproduction) for narrowly defined purposes and under strict conditions, including:

  • lawful access to the content; and
  • compliance with the “three-step test” (special case, no conflict with normal exploitation, and no unreasonable prejudice to the rightsholder’s interests).

Final remarks

While a decision from the CJEU isn’t expected before late 2026 or early 2027, this first preliminary reference marks a pivotal moment. For the first time, the EU’s top court will weigh in on how copyright law applies to generative AI – a turning point in the legal shaping of AI innovation in Europe.

Author: Maria Vittoria Pessina

 

Legal Tech

Legal tech market evolution: From hype to strategic implementation

The legal technology landscape has reached an inflection point. While the market continues its explosive growth trajectory, the conversation among legal professionals has shifted from “whether to adopt AI” to “how to implement it strategically.” Recent market data reveals both the enormous potential and the critical challenges facing legal tech adoption in 2025.

Market dynamics: Growth amid saturation

The numbers from several public and confidential market reports we assessed tell a compelling story of both opportunity and challenge. The global legal services market, valued at approximately USD1.05 trillion in 2024, continues steady growth at a CAGR of 4.5-4.6%. However, the legaltech market, despite reaching USD26.7 billion in 2024 and growing at an impressive 12.8% CAGR, represents only 2.54% of the total legal market, though it continues to grow steadily.

More striking is the disconnect between investment and market penetration. Almost 9,500 legaltech companies globally have raised USD16.6 billion in total funding, yet only 14 have achieved unicorn status – a success rate of merely 0.15%. This saturation suggests that despite massive capital deployment, most solutions aren't solving genuine client problems or achieving meaningful adoption.
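The ratios quoted above follow directly from the headline figures – a quick back-of-the-envelope check:

```python
legaltech_market = 26.7   # USD billion, 2024
legal_market = 1050.0     # USD billion (~USD1.05 trillion), 2024
print(f"penetration: {legaltech_market / legal_market:.2%}")  # -> 2.54%

unicorns, companies = 14, 9500
print(f"unicorn rate: {unicorns / companies:.2%}")            # -> 0.15%
```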

The geographic innovation gap

The funding distribution reveals another critical insight: approximately 70% of legaltech investment is concentrated in the US, with Anglo-centric markets (US, Canada, UK) capturing roughly 80% of total funding. This geographic concentration creates significant opportunities in underserved markets, where local legal systems, cultural expectations, and regulatory frameworks demand tailored solutions rather than one-size-fits-all approaches.

AI adoption reality check

According to the 2025 Thomson Reuters Generative AI in Professional Services Report, 41% of legal professionals now use publicly available AI tools, with an additional 17% using industry-specific AI solutions. Organisation-wide AI usage has nearly doubled to 22% in 2025, compared to 12% in 2024. Remarkably, 95% of legal professionals believe AI will be central to their organisation's workflow within five years, despite only 13% reporting it as central today.

However, a critical gap emerges in measurement: only 20% of organisations currently measure ROI from their AI investments, while 59% aren't measuring ROI at all. This suggests widespread experimentation without systematic evaluation of outcomes, which can waste time and budget and, ultimately, erode the board's trust.

Process-first implementation: Lessons from the field

In May, I spoke at the Future Lawyer Europe 2025 conference, where I had the opportunity to engage with legal tech leaders from international law firms, national practices, and in-house legal teams. Despite varied backgrounds and jurisdictions, a consensus emerged: while enthusiasm for AI remains high and most organisations have piloted or integrated AI solutions, the key to successful implementation lies in focusing on process pain points rather than technology features.

The challenge isn't identifying promising AI tools – it's understanding where your organisation's actual bottlenecks exist and determining which specific technologies can address them effectively. Many legal teams approach technology adoption backwards, selecting impressive-sounding solutions before clearly mapping their operational challenges.

Critical implementation challenges

Several key challenges emerged from both market analysis and practitioner feedback:

  • Vendor selection complexity: With nearly 9,500 legaltech companies globally, legal teams face decision paralysis when selecting solutions. The abundance of options often obscures fundamental questions about process fit and genuine value creation.
  • Cultural integration: Over 70% of legal firms have incorporated cloud solutions, indicating technical adoption capability. However, the human element – training, change management, and workflow integration – remains the primary implementation challenge.
  • ROI measurement: The lack of systematic ROI measurement across the industry suggests that many organisations are investing in technology without establishing clear success metrics or evaluation frameworks.
  • Jurisdictional complexity: Legal technology must navigate varying regulatory environments, professional standards, and practice traditions across jurisdictions – a challenge compounded by the geographic concentration of development resources.

The DLA Piper process mapping methodology

Recognising this challenge, at DLA Piper we’ve developed a systematic process mapping methodology to help clients identify genuine bottlenecks and analyse intervention opportunities. This approach involves:

  • Pain point discovery: Systematically documenting current workflows to identify time-consuming, error-prone, or frustrating tasks
  • Bottleneck analysis: Quantifying the impact of each identified pain point on productivity, quality, and client satisfaction
  • Technology alignment: Matching specific technological capabilities to mapped process challenges
  • Implementation prioritization: Ranking intervention opportunities based on impact potential and implementation complexity

This methodology ensures that technology adoption serves genuine operational needs rather than pursuing innovation for its own sake.
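As one possible way to operationalise the final prioritisation step (a sketch only: the tasks, scores, and impact-to-complexity weighting are invented for illustration and aren't DLA Piper's actual model):

```python
# Illustrative ranking of mapped pain points. Higher score = earlier
# intervention. All entries and the weighting are hypothetical.
pain_points = [
    {"task": "contract review triage", "impact": 8, "complexity": 3},
    {"task": "matter intake routing", "impact": 6, "complexity": 5},
    {"task": "billing narrative QA", "impact": 4, "complexity": 2},
]

ranked = sorted(pain_points, key=lambda p: p["impact"] / p["complexity"], reverse=True)
for p in ranked:
    print(f"{p['task']}: priority score {p['impact'] / p['complexity']:.1f}")
```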

Conclusion

The legal technology market stands at a critical juncture. While the growth potential remains enormous, success will increasingly depend on strategic, process-focused implementation rather than technology-first approaches.

The organisations that will thrive are those that take the time to understand their genuine operational challenges, map their processes systematically, and select technologies that solve real problems rather than merely delivering impressive demonstrations. In a market crowded with solutions, the competitive advantage belongs to those who can identify what they actually need to solve.

As we move through 2025, the question is no longer whether to adopt legal technology, but how to do so intelligently, measurably and sustainably.

Author: Tommaso Ricci


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D'Angeli, Chiara D'Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani and Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D'Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as Diritto Intelligente, a monthly magazine dedicated to AI, here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
