
6 October 2025

Innovation Law Insights

3 October 2025
Artificial intelligence

AI Risk Assessment Frameworks: Mapping, Classifying and Prioritizing Risks

An AI risk assessment is the process of mapping where risks emerge during the lifecycle of an AI system, classifying them by severity and probability, and prioritizing which ones to mitigate first, all within the compliance framework imposed by the EU AI Act.

Why an AI risk assessment matters

AI brings unique challenges because of its scale, opacity and autonomy. A biased decision in a human process may affect a handful of people, while a biased algorithm can impact thousands in a fraction of a second. That is why an AI risk assessment is essential not only to reduce exposure to liability but also to safeguard the reputation and credibility of organizations.

The EU AI Act makes this explicit. Providers of high-risk systems must establish and maintain a risk management system throughout the entire lifecycle of the AI system. This obligation goes far beyond drafting a document once and filing it away. It requires continuous mapping, classification and prioritization of risks, together with the adoption of technical and organizational measures to mitigate them.

Mapping risks across the AI lifecycle

The first stage of any AI risk assessment is mapping. This means identifying where risks may emerge across all phases of the lifecycle. Risks can materialize at the data collection stage, where low-quality or non-representative data can introduce bias. They can arise during training, where the choice of model and architecture may affect transparency or explainability. They can also emerge during deployment, for instance if an AI system is used in contexts that were never envisaged by its designers.

Mapping must also take into account the different actors involved. The EU AI Act draws a line between providers, deployers, distributors and importers, and obligations will vary accordingly. A company integrating a general-purpose AI model into its product will face different responsibilities than the original developer of the model. A thorough map ensures that accountability is clear, and that risks are not overlooked simply because they sit outside the immediate control of one actor.

Classifying risks under a regulatory lens

Once risks are mapped, they need to be classified. The AI risk assessment cannot stop at a simple list of potential harms; it needs to provide a structured view of their severity and likelihood.

The EU AI Act itself operates through a risk-based logic. It prohibits unacceptable uses of AI, such as manipulative practices or social scoring. It imposes the strictest obligations on high-risk systems, lighter transparency duties on limited-risk systems, and almost no requirements on minimal-risk AI. But while this legal categorization is useful, it is not sufficient for operational risk management.

Companies need to assess severity: how serious would the harm be if it materialized? They need to estimate likelihood: how probable is the event, given current safeguards? And they need to consider detectability: how quickly can the harm be spotted and addressed? A low-probability event that is very difficult to detect can still represent a critical risk.
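
By way of illustration only, these three dimensions can be combined into a single numerical score. The short Python sketch below is a hypothetical example of such a scoring exercise: the EU AI Act does not prescribe any formula, and the 1-5 scales and the multiplicative combination are assumptions made purely for clarity.

```python
from dataclasses import dataclass

# Illustrative only: the EU AI Act does not prescribe any numerical scoring formula.
# The 1-5 scales and the multiplicative combination are assumptions made for clarity.

@dataclass
class Risk:
    name: str
    severity: int       # 1 (negligible) .. 5 (catastrophic)
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    detectability: int  # 1 (immediately detectable) .. 5 (very hard to detect)

def risk_score(risk: Risk) -> int:
    """Combine the three dimensions into a single priority score.

    Harder-to-detect risks score higher, reflecting that a low-probability
    event which is difficult to detect can still be critical.
    """
    return risk.severity * risk.likelihood * risk.detectability

bias_risk = Risk("Bias in training data", severity=4, likelihood=3, detectability=4)
print(risk_score(bias_risk))  # 48
```

In practice, organizations often adapt classic risk-matrix or FMEA-style approaches to their own context; what matters is that the classification is explicit, documented and repeatable.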

Classification should also include an analysis of fundamental rights. Discrimination, privacy violations or manipulative practices may not always be captured by technical metrics, but they can have severe legal and reputational consequences.

Prioritizing risks and planning mitigations

Classification is only useful if it leads to prioritization. Resources are limited, and not all risks can be addressed simultaneously. A structured AI risk assessment allows organizations to determine which risks must be eliminated, which require strong mitigation and which can be accepted with monitoring as residual risks.

Here, the EU AI Act sets clear boundaries. Prohibited AI systems cannot be placed on the EU market, put into service or used. High-risk AI systems cannot be deployed without the safeguards required by the Act, such as high-quality data governance, transparency, logging, human oversight and post-market monitoring. These are not optional controls; they are obligations. Limited-risk AI systems must comply with the transparency requirements set out in Article 50 of the EU AI Act.

Beyond regulatory imperatives, organizations should prioritize based on a combination of severity, probability and detectability. A catastrophic harm with a medium probability of occurrence should always come before a minor reputational risk with a high probability.
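
Continuing the illustrative scoring approach sketched above, ranking the mapped risks by such a score reproduces this rule of thumb: the catastrophic, medium-probability harm comes out ahead of the minor, high-probability one. The values below are again assumptions, used only to demonstrate the ordering.

```python
# Each risk is a (name, severity, likelihood, detectability) tuple on 1-5 scales.
# The values are assumptions chosen only to demonstrate the ordering rule.

risks = [
    ("Discriminatory credit decisions", 5, 3, 3),  # catastrophic harm, medium probability
    ("Minor reputational complaint", 2, 5, 2),     # minor harm, high probability
    ("Unlogged model drift", 3, 3, 5),             # moderate harm, hard to detect
]

def priority(risk):
    _name, severity, likelihood, detectability = risk
    return severity * likelihood * detectability

# Highest-priority risks first: the catastrophic, medium-probability harm
# outranks the minor, high-probability reputational risk.
for r in sorted(risks, key=priority, reverse=True):
    print(f"{priority(r):>3}  {r[0]}")
```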

Mitigation strategies can vary. Technical measures may include bias mitigation techniques, anomaly detection or adversarial testing. Organizational measures can involve human-in-the-loop reviews, escalation procedures or clear accountability structures. Design changes may also be necessary, such as simplifying the model or excluding certain variables that create discriminatory outcomes. What matters is that mitigation plans are documented, tested and updated as systems evolve.

Embedding the EU AI Act compliance overlay

A key point to remember is that the AI risk assessment is not a standalone process. When mandatory under the EU AI Act, it is one element of a broader compliance system that includes technical documentation, conformity assessments, registration obligations and post-market monitoring.

This means that each risk identified must be linked to specific documentation. If bias is identified as a risk, the technical file should show how data governance measures address it. If lack of explainability is flagged, the file must include information on transparency tools or user information provided. Regulators will expect to see a clear chain of reasoning between risks, mitigations and compliance artefacts.
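
One hypothetical way of keeping that chain of reasoning traceable is a lightweight risk register that links each identified risk to its mitigation, to the compliance artefact documenting it and to an owner. The sketch below is illustrative only: the field names and structure are assumptions about how an organization might organize its own records, not requirements of the Act.

```python
# Hypothetical field names, not terminology mandated by the EU AI Act.
risk_register = [
    {
        "risk": "Bias in training data",
        "mitigation": "Data governance measures: representativeness checks and re-sampling",
        "artefact": "Technical file, data governance section (Art. 10 AI Act)",
        "owner": "Data science lead",
        "status": "mitigated, residual risk monitored",
    },
    {
        "risk": "Lack of explainability",
        "mitigation": "Transparency tools and instructions for use provided to deployers",
        "artefact": "Technical file, transparency and user information (Art. 13 AI Act)",
        "owner": "Product compliance",
        "status": "open",
    },
]

# A simple internal consistency check: every identified risk must point to at
# least one mitigation and one compliance artefact.
incomplete = [r["risk"] for r in risk_register if not (r["mitigation"] and r["artefact"])]
assert not incomplete, f"Risks without a documented mitigation or artefact: {incomplete}"
```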

The Act also emphasizes lifecycle management. Risk assessments must be updated when the system is modified, retrained or redeployed in new contexts. They must also be revisited in light of real-world performance, with post-market monitoring feeding back into the assessment process.

Finally, risk management under the AI Act does not operate in a vacuum. Other legislation, from GDPR to product safety and consumer protection rules to intellectual property laws, continues to apply. An effective AI risk assessment must therefore be integrated with wider compliance frameworks.

A roadmap for implementing an AI risk assessment

How should companies approach this in practice? Based on experience advising clients across different sectors, I suggest the following roadmap:

  1. Portfolio review: identify all AI systems in use, including those integrated through third-party solutions, and set up a process to identify any new systems and/or use cases.
  2. Mapping workshop: bring together technical, legal and compliance teams to map risks across the lifecycle.
  3. Classification exercise: evaluate severity, probability and detectability, while aligning with the EU AI Act risk categories.
  4. Prioritization: rank risks, taking into account regulatory imperatives as well as business priorities.
  5. Mitigation planning: assign ownership for each mitigation measure, set timelines and define success indicators.
  6. Documentation: prepare the technical file and other compliance artefacts, ensuring consistency with AI Act obligations.
  7. Monitoring: establish procedures for post-market monitoring, incident reporting and continuous updates to the risk assessment.

This process may seem resource-intensive, but it is also an opportunity. Companies that invest in strong risk assessment frameworks can use them to differentiate in the market, demonstrating that their AI is not only innovative but also trustworthy.

The EU AI Act is changing the way organizations think about risk. An AI risk assessment is no longer an internal formality; it is a legal requirement and a competitive advantage. By mapping, classifying and prioritizing risks, companies can build a defensible compliance framework, avoid regulatory friction and enhance trust with clients, investors and regulators.

The lesson is clear: risk assessments are not a cost to be minimized, but an investment in sustainable AI governance. Those who embrace them early will not only comply with the law but also lead in the marketplace.

Author: Giulio Coraggio

 

Technology

Digital Simplification: Europe and the Digital Omnibus Challenge

With the intent to simplify and harmonize the growing body of European digital regulation, the European Commission has announced the Digital Omnibus, a simplification package that promises to streamline rules on data, cybersecurity, and artificial intelligence. The initiative fits within a regulatory context characterized by regulations of varying "age" and complexity, where the rapid pace of technological evolution meets the need to provide legal certainty to market operators.

What is the Digital Omnibus and when will it arrive?

On September 16, 2025, the European Commission launched a public consultation to gather feedback on how to simplify digital legislation, with a deadline set for October 14, 2025. The intent is to present the Digital Omnibus package by the end of 2025.

The initiative specifically aims to simplify legislation in the areas of data, cybersecurity, and artificial intelligence - three domains that have seen significant regulatory proliferation in recent years. The stated objective is to reduce bureaucratic reporting obligations for businesses and harmonize the digital regulatory framework.

This approach continues the European digital strategy that has produced a series of complex and innovative legislative acts in recent years: from the AI Act, which entered into force in August 2024, to the recently applicable Data Act, including the DORA Regulation and the NIS 2 Directive.

Regulation and technology

To understand the rationale behind the Digital Omnibus, one must consider the peculiar nature of the technology sector. The speed of technological innovation is such that even relatively recent regulations may require adjustments to keep pace with market and technological evolution.

The Data Act, published in December 2023 and applicable from September 2025, is a significant example. Despite being formally recent, the regulation was conceived and drafted in a rapidly evolving technological context, where new business models and emerging technologies may require regulatory clarifications or adjustments.

The case of the AI Act, which entered into force in August 2024, is different: it is a genuinely recent piece of legislation. Here, reports of "implementation challenges" from businesses may reflect the intrinsic complexity of regulating such a rapidly evolving sector rather than structural deficiencies in the regulation itself.

The European Commission's initiative therefore responds to concrete market demands for greater clarity and procedural simplification. The development process has involved various stakeholders through public consultations on different aspects of the European digital strategy, including the Data Union Strategy, the revision of the Cybersecurity Act, and the Apply AI Strategy. This participatory approach aims to ensure that the proposed simplifications address concrete market needs while maintaining the effectiveness of the intended protections.

The challenges: planning and adaptability

The question of regulatory certainty takes on particular relevance for companies operating in the technology sector. The European digital regulatory landscape presents a distinctive characteristic: the coexistence of the ambition to create a stable legal framework and the need to adapt to a constantly evolving sector. This tension is particularly evident in the case of artificial intelligence, where the speed of technological development poses unprecedented challenges to traditional regulatory approaches.

Companies in the sector therefore find themselves operating in a context where they must plan compliance investments for regulations that, by their nature, may require updates or clarifications. The AI Act case is exemplary: on July 18, 2025, the European Commission published guidelines to clarify key provisions applicable to General Purpose AI models, highlighting how even recent regulations can benefit from additional interpretations.

It is therefore clear that, for sector operators, the balance between regulatory flexibility and legal certainty remains a central question, particularly for sectors that require long-term investments in research, development, and compliance.

The Digital Omnibus further emphasizes this dynamic and reminds companies navigating the tech sector of the need to develop continuous adaptation capabilities while maintaining strategic coherence in their compliance initiatives.

What strategy for businesses?

Faced with this evolving scenario, companies find themselves facing the difficult task of adapting to constantly changing regulations.

The key to managing this complexity lies in identifying fundamental principles: understanding the regulatory and technological core of the regulations, which aspects of technology the regulation intends to govern, which risks it aims to prevent, and which opportunities it intends to promote. This deep understanding allows for greater confidence in navigating the landscape, aware that interpretative guidelines or additional obligations may emerge, but that the fundamental line will remain stable.

The adaptation process can therefore be structured in three essential phases.

First, it is necessary to assess internal capabilities to determine whether the organization has the necessary competencies to conduct an in-depth analysis of the regulatory core and its technological implications.

Second, it is necessary to identify and prioritize the activities required to ensure compliance, evaluating their urgency and impact.

Finally, it is necessary to develop a realistic compliance program with timelines that take into account all the parties involved in the organization, sufficiently flexible to adapt to any regulatory clarifications.

Author: Edoardo Bardelli

 

Blockchain and Cryptocurrency

On 25 September 2025, nine leading European banking groups - ING, Banca Sella, KBC, Danske Bank, DekaBank, UniCredit, SEB, CaixaBank, and Raiffeisen Bank International - announced the creation of a consortium for the issuance of an electronic money token (hereinafter, “EMT” or “stablecoin”) whose value will be directly and stably pegged to the euro, in compliance with Regulation (EU) 2023/1114 on markets in crypto-assets (hereinafter, “MiCAR”).

The launch, expected in the second half of 2026, represents the first pan-European initiative promoted directly by credit institutions, with the aim of introducing a regulated, secure, and scalable digital payment instrument capable of competing with U.S. initiatives and strengthening the European Union’s strategic autonomy in the payments sector.

The stablecoin will be issued through a company based in the Netherlands that will seek authorization as an Electronic Money Institution (hereinafter, “EMI”) under the supervision of De Nederlandsche Bank (hereinafter, “DNB”). In this way, the project seeks to combine the prudential and governance requirements set out in MiCAR, Directive (EU) 2015/2366 on payment services (hereinafter, “PSD2”), and Regulation (EU) No 575/2013 on prudential requirements (hereinafter, “CRR”), with the characteristics of distributed ledger technology (hereinafter, “DLT”).

The infrastructure promises instant, low-cost, 24/7 payments, with potential applications including cross-border transfers, monitoring of transactions in digital assets, and greater traceability of supply chains.

However, some critical issues remain. The proliferation of stablecoins in the European market may represent an opportunity in terms of competition, but also a risk of fragmentation, amplified by the coexistence of other digital payment instruments such as instant transfers, regulated by Regulation (EU) 2024/886 (hereinafter, “IPR”). Furthermore, actual adoption by consumers and businesses remains uncertain, given the limited familiarity of users with the instrument.

Despite such uncertainties, the initiative confirms the direction of the European banking and decentralized finance industries: the stablecoin is expected to rank among the three main methods of payment in the coming years, building its diffusion on speed, efficiency, transparency, and traceability.

  1. A European Banking Consortium between MiCAR and Digital Sovereignty

The project promoted by nine leading European credit institutions marks a clear break from previous stablecoin experiences. It is not an initiative driven by financially unregulated tech operators, but rather an instrument conceived from the outset within a binding regulatory framework, aligned with MiCAR and PSD2.

The decision to establish the company in the Netherlands and to apply for authorization as an EMI under the supervision of DNB carries a precise meaning: ensuring that the new EMT is treated, from its very issuance, as a regulated payment instrument subject to capital, governance, and consumer protection requirements comparable to those imposed on traditional market operators.

The added value of the project does not lie solely in its technical-legal aspects. The decision of nine banks to converge on a common standard aims to strengthen Europe’s digital sovereignty, reducing dependence on extra-EU stablecoins, largely of U.S. origin.

This constitutes a complementary, rather than alternative, response to the digital euro project promoted by the European Central Bank (hereinafter, “ECB”): while the latter remains an institutional initiative still under development, the consortium’s stablecoin is a private instrument ready to establish itself in the short term as a means of payment governed by Union law.

The symbolic impact is equally relevant: the adoption of DLT is no longer confined to experimental projects but becomes an integral part of Europe’s financial infrastructure. From this perspective, the banking stablecoin represents the baptism of fire for a phase in which digital innovation and prudential regulation combine to define a European model of regulated payments.

  2. Opportunities and Risks

Such an initiative raises crucial questions about the future of competition in digital payments.

For the first time, the European market will face a mosaic of instruments and products governed by different rules - EMTs, instant transfers, and, in the future, the digital euro - with the concrete risk that innovation could result in fragmentation rather than efficiency.

From a competitive perspective, the presence of multiple stablecoins could stimulate competition in terms of cost and speed, but at the same time create a new form of market segmentation, in which users and businesses are forced to choose between circuits that are not always interoperable. This plurality of standards risks undermining MiCAR's stated objective.

Competition with instant payments adds an additional layer of complexity. Instant transfers regulated by the IPR already allow real-time transfers at European level, without requiring users to adopt new technologies. For a banking stablecoin to become truly competitive, it must therefore offer added value: programmability of payments, automation of flows, or cross-border solutions not covered by SEPA schemes. Otherwise, the risk is that it will remain confined to a niche, unable to scale.

The central issue remains public adoption. Despite growing industry attention, awareness of stablecoins remains low, even among financial operators most attuned to innovation. The recent Communication by the Bank of Italy on EMTs clearly demonstrates this: without financial education efforts and without building trust in new regulated instruments, supply risks outpacing demand. In this regard, the legitimacy derived from the direct involvement of banks can be decisive: trust in traditional intermediaries may transform the stablecoin from a “technological experiment” into an everyday instrument.

Europe, therefore, risks multiplying circuits without reinforcing stability.

  3. Regulatory Requirements and Practical Profiles

The most significant feature of the European banking stablecoin is not the technology on which it is built, but the way in which it is embedded in the Union’s regulatory framework.

The initiative allows no shortcuts: the institutions involved, should they intend to provide services involving the EMT, will have to comply simultaneously with the requirements of MiCAR and PSD2, thus assuming prudential and governance obligations typical of both crypto-asset service providers (“CASPs”) and payment institutions (“PIs”) as well as EMIs.

The first novelty lies in the cumulative nature of prudential safeguards: it will not be possible to choose whether to apply MiCAR or PSD2, but both must be observed, ensuring own funds and insurance coverage adequate to cover the full spectrum of risks related to the provision of stablecoin services. The same logic applies to ownership structures and corporate officers, who will be subject to broader requirements than those already provided under MiCAR: integrity, propriety, independence of judgment, and effective availability of time, in line with traditional banking law.

Operationally, the focus is on customer protection.

From 2026, payments in the banking stablecoin will have to be secured by strong customer authentication (“SCA”) procedures analogous to those applied to traditional electronic payments, with the operator bearing direct responsibility in cases of non-application.

This is complemented by the obligation of periodic reporting on fraud, extending to EMT services the regime already applicable to classic payment instruments. By contrast, the rules on open banking are excluded, deemed incompatible with the logic of DLT infrastructures: a targeted adjustment that avoids technical distortions without reducing user protection.

The overall outcome is a regulatory framework in which the European banking stablecoin stands at the same level of reliability and accountability as conventional payment instruments. For banks, this implies assuming reinforced compliance and supervisory obligations; for the market, it means having access to a digital asset that, for the first time, combines technological innovation with prudential discipline free of grey areas.

In conclusion, a European banking stablecoin would not merely represent an additional piece in the puzzle. For the first time, a digital instrument enters the core of financial regulation, with capital, governance, and customer protection obligations comparable to those of traditional providers.

The EU thus takes a decisive step toward strategic autonomy in payments, balancing innovation and stability. Those who arrive prepared will not only comply with the rules but may also help shape a new European standard in digital payments.

Author: Andrea Pantaleo & Giulio Napolitano

 

Intellectual Property

Trade Secrets: biotech company sues pharmaceutical multinational for misappropriation of trade secrets

A biotechnology company specializing in mRNA-based therapeutics has filed a civil complaint before the U.S. District Court for the Southern District of California against a well-known pharmaceutical multinational, its newly acquired subsidiary, which is a competitor of the plaintiff, and a former employee and a former collaborator of the plaintiff. The complaint outlines how the two individuals allegedly transferred confidential information and trade secrets relating to a proprietary lipid nanoparticle technology to the competitor, which subsequently used the proprietary information in patent filings. The misappropriated trade secrets were also allegedly used as part of a technology portfolio to entice the pharmaceutical company into concluding the acquisition.

The misappropriation claim

The biotech company alleges that the competitor's core intellectual property, specifically its lipid nanoparticle technology, was unlawfully derived from trade secrets disclosed by former collaborators of the plaintiff, who were subsequently hired by the competitor. Indeed, the complaint asserts that the two individuals transferred confidential information in violation of their contractual obligations and, shortly after starting work for the competitor, they allegedly modified the proprietary information and incorporated it into a patent application, which named the former collaborators as inventors.

The claim clearly outlines how the allegedly misappropriated technology qualifies for trade secret protection under both federal (Defend Trade Secrets Act) and state (California Uniform Trade Secrets Act) law. Indeed, the new generation of lipids employed was different from, and not generally known to or readily ascertainable by, others, including competitors. At all relevant times, the information has had, and continues to have, independent economic value, both actual and potential, as a result of not being generally known or readily ascertainable through proper means or from generally available public sources. Lastly, the information was protected with reasonable security measures, including by not disclosing it in published patent applications. The company uses, and has used over the years, adequate security measures to protect its trade secrets, including confidentiality agreements with its personnel and third parties and industry-standard physical and electronic security measures at its facilities, protecting sensitive information from unauthorized use and disclosure.

Among the remedies sought, the plaintiff requests injunctive relief to prevent further use or disclosure of the trade secrets, correction of inventorship on a granted U.S. patent, and declaratory judgment affirming its ownership of the disputed intellectual property. It also seeks compensatory and exemplary damages, disgorgement of unjust enrichment, reasonable royalties and reimbursement of attorneys' fees.

The involvement of the acquiring company and possible consequences

The action also involves the multinational company that acquired the plaintiff's competitor, on the basis that it benefited from the misappropriation of trade secrets and proprietary technology belonging to the plaintiff. Indeed, the claim states that the competitor's acquisition value, which was reportedly over $2 billion, consisted primarily of trade secrets that the plaintiff's former collaborators had misappropriated. The acquiring company knew or had reason to know that the trade secrets had been acquired and obtained by improper means, as, during the due diligence process, it reasonably had access to information that could have flagged the issue, namely a letter dated April 2024 in which the plaintiff raised its initial concerns. Notwithstanding this, the acquiring company proceeded with the transaction, thereby contributing to the alleged misconduct.

Under both US federal and Californian state law, a party that knowingly uses or profits from misappropriated trade secrets may be held liable. The plaintiff argues that the acquirer's failure to investigate and address these issues prior to closing constitutes willful disregard, justifying its inclusion as a defendant in the action.

The acquiring company now risks substantial legal, financial, and reputational consequences if the allegations are substantiated. Should the Court find that the company knowingly used or benefitted from the misappropriated trade secrets, significant court-ordered injunctive relief could be granted, including restrictions on the use of key intellectual property that was central to the acquired business.

Financially, the company may be required to pay compensatory and exemplary damages, disgorge profits derived from the disputed technology, and cover attorneys' fees. Reputationally, the company risks diminished investments, regulatory scrutiny, and disruption of strategic partnerships, particularly if it is found to have disregarded red flags during due diligence or failed to implement adequate compliance protocols.

The importance of due diligence on trade secrets in corporate transactions

In acquisition transactions, particularly within the pharmaceutical and biotech sectors, rigorous due diligence concerning trade secrets and related contractual obligations is essential to mitigate legal and financial risks. Acquirers must thoroughly assess the origin, ownership, and protection status of proprietary technologies, including verifying the existence and enforceability of confidentiality agreements, consulting arrangements, and intellectual property assignments.

Failure to identify potential misappropriation or contractual breaches prior to closing can result in post-acquisition litigation, reputational damage, and impairment of key assets. Transparent disclosure and legal vetting of intellectual property and trade secret provenance should be a core component of any acquisition strategy to ensure compliance, preserve value, and safeguard against future claims.

Author: Chiara D'Onofrio

 

Legal Design

Legal Design Tricks - Little tips to use legal design in your daily activities

Trick #10: Visual Design in Legal Documents - When Pictures Speak Louder Than Words!

You have learned how to simplify and organize content. Now discover how to make it truly clear and memorable using simple, effective visual tools.

Why use visual elements in legal documents?

  • They guide the eye, making documents easier to read
  • They simplify complex concepts and reduce misunderstandings
  • They organize, communicate, and compare data and information more intuitively
  • They add communicative value, helping readers grasp legal concepts faster and more efficiently

Highlight and simplify with ICONS and TABLES

  • Icons → make text more readable and guide attention
    When to use them: to signal prohibitions, obligations, or operational actions in internal policies
    Example: 🔒 privacy, 🕒 schedules, 💻 IT devices
  • Tables → help compare data and information clearly
    When to use them: in contracts or policies to show obligations, deadlines, or options
    Example: a table summarizing the parties’ obligations in a contract

Visualize processes and data with CHARTS, TIMELINES, and FLOWCHARTS

  • Charts → turn numbers into instant insights
    When to use them: legal reports, risk analysis, KPI tracking
    Example: a bar chart showing average contract approval times across different offices
  • Timelines → display procedural steps and sequences
    When to use them: to illustrate internal workflows or contractual milestones
    Example: HR process timeline (onboarding, hiring, training)
  • Flowcharts → explain complex steps linearly
    When to use them: compliance, internal procedures, escalation paths
    Example: reporting workflow for internal compliance

Provide context and overview with MAPS and INFOGRAPHICS

  • Maps → represent relationships or flows
    When to use them: to illustrate contractual relationships, data flows, or corporate activities
    Example: a map of data processing and flows
  • Infographics → summarize and simplify procedures, rules, or legal concepts
    When to use them: internal communications and training
    Example: a poster on AI usage restrictions/obligations for employees

Rules for using visual elements effectively

  1. Simplicity → less is more: every element should have a clear purpose
  2. Clarity → captions and titles must explain the visual
  3. Consistency → uniform style for icons, colors, and formats
  4. Readability → pay attention to fonts, contrasts, and sizes
  5. Relevance → choose the right visual for the content (e.g., don’t use a pie chart for a timeline)

Did you know?

Studies show that people remember only 10% of what they read but up to 65% of what they see visually. In legal documents, a well-chosen icon or diagram can be far more memorable than an entire paragraph!

What’s next?

Not a designer but want to create effective visuals? In the next episode of Legal Design Tricks, we will explore the digital tools you can use to implement Legal Design!

Author: Deborah Paracchini

 


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Noemi Canova, Gabriele Cattaneo, Giovanni Chieco, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Giulio Napolitano, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Dorina Simaku, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as Diritto Intelligente, a monthly magazine dedicated to AI, here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
