
25 July 2025 • 20 minute read
Innovation Law Insights
Artificial Intelligence
AI Act GPAI Guidelines – What businesses need to know before August 2025
The European Commission’s GPAI guidelines under the AI Act are here – and they’re about to change how general-purpose AI models are developed, distributed, and regulated in the EU. If you work with large language models, generative AI systems, or provide AI tools to customers in the EU, these guidelines define the rules you’ll need to follow from 2 August 2025.
This article walks you through the key definitions, obligations, exemptions, and enforcement timelines so you’re not caught off guard.
Why the GPAI Guidelines Under the AI Act Matter
The GPAI guidelines under the AI Act are a milestone: for the first time, the EU has clarified how it will interpret and enforce the obligations for general-purpose AI (GPAI) providers. This guidance applies to:
- Foundation model developers
- API providers
- Open-source AI distributors
- Enterprises integrating GPAI into downstream tools
- Any company modifying or fine-tuning base models
Some questions for your business to address:
- Does your current AI strategy assume that the rules only apply to high-risk systems, not general-purpose models?
- Are you sure you’re not a GPAI provider under the new definitions?
What counts as a general-purpose AI model?
The AI Act defines a GPAI model as one that shows “significant generality” and can “competently perform a wide range of distinct tasks.” That sounds vague – but the GPAI guidelines provide a concrete threshold.
A model is presumed to be a GPAI model if both of the following apply:
- it was trained using more than 10²³ FLOPs; and
- it can generate language (text or audio), text-to-image, or text-to-video content.
This is a significant increase from the originally proposed 10²² FLOPs. According to the EU Commission, 10²³ reflects the typical compute required to train a model with at least 1 billion parameters. However, models trained for narrow purposes – even with high compute – are not considered GPAI. For example, a speech-to-text model trained with 10²⁴ FLOPs is out of scope if it only performs that task.
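To get a feel for what the threshold means in practice, the sketch below estimates training compute using the commonly cited approximation of roughly 6 FLOPs per parameter per training token. The approximation, the function names, and the example figures are illustrative assumptions, not a calculation method prescribed by the guidelines.
```python
# Minimal sketch: estimate training compute and compare it against the
# 10^23 FLOPs presumption threshold for GPAI models.
# The "6 * parameters * tokens" rule of thumb and the example numbers
# are assumptions for illustration, not figures from the guidelines.

GPAI_PRESUMPTION_FLOPS = 1e23


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens


def presumed_gpai(training_flops: float, generates_language_or_media: bool) -> bool:
    """Presumption applies if the compute threshold is exceeded and the model
    can generate language (text/audio) or text-to-image/video content."""
    return training_flops > GPAI_PRESUMPTION_FLOPS and generates_language_or_media


if __name__ == "__main__":
    # Hypothetical 8B-parameter model trained on 15 trillion tokens.
    flops = estimate_training_flops(8e9, 15e12)  # ~7.2e23 FLOPs
    print(f"Estimated training compute: {flops:.1e} FLOPs")
    print("Presumed GPAI:", presumed_gpai(flops, generates_language_or_media=True))
```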
Some questions that are not addressed by the AI Act GPAI Guidelines:
- Is using FLOPs as a proxy for generality too simplistic?
- Should the EU consider alternative benchmarks, like real-world performance across domains?
Who is a GPAI provider?
The GPAI guidelines clarify that you’re a provider if you either:
- develop a GPAI model yourself; or
- have it developed and place it on the EU market under your name, whether for free or for payment.
It doesn’t matter whether the model is distributed via:
- APIs
- Software libraries
- Public repositories
- Cloud services
- Mobile or web apps
Even if your company is outside the EU, these obligations apply once your model enters the EU market. You’ll need to appoint an authorised representative in the EU if you’re not established there.
Some questions that are not addressed by the AI Act GPAI Guidelines:
- If a US-based model is used inside an EU-deployed product, who’s the liable provider?
- How will enforcement work for global models accessed by EU users?
Key obligations for GPAI providers
Starting 2 August 2025, GPAI providers must:
- Maintain up-to-date technical documentation. This must cover the model’s architecture, training, testing, and evaluation procedures (Article 53(1)(a)).
- Provide information to downstream users, especially those integrating the model into their AI systems (Article 53(1)(b)).
- Implement a copyright policy. This is to ensure compliance with EU copyright law, including opt-outs under Article 4(3) of Directive 2019/790 (Article 53(1)(c)).
- Publish a training data summary. This public summary must outline the content used to train the model (Article 53(1)(d)).
Some questions that still need to be addressed:
- How specific must the data summary be to comply?
- Will copyright compliance force developers to remove training data retroactively?
Additional rules for GPAI models with systemic risk
Some GPAI models will be subject to stricter obligations due to their potential impact on public safety, rights, or the internal market.
Your model is presumed to have systemic risk if either:
- it was trained with more than 10²⁵ FLOPs; or
- it matches the capabilities of the most advanced models on the market.
If so, you must:
- conduct adversarial testing and model evaluations;
- track and report serious incidents;
- implement robust cybersecurity protections; and
- notify the AI Office before and during training.
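Using the same rough estimate, the compute-based part of the systemic-risk presumption can be checked against the 10²⁵ FLOPs threshold. This is only a sketch: the second criterion – matching the capabilities of the most advanced models – is a qualitative assessment and cannot be reduced to a number.
```python
# Sketch: check the 10^25 FLOPs systemic-risk presumption threshold.
# The compute figure would come from the same kind of rough estimate as above;
# the "matches the most advanced models" criterion is not modelled here.

SYSTEMIC_RISK_FLOPS = 1e25


def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS


# Hypothetical frontier-scale run: 400B parameters on 10T tokens -> ~2.4e25 FLOPs.
print(presumed_systemic_risk(6.0 * 400e9 * 10e12))  # True
```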
Some questions that are not addressed by the AI Act GPAI Guidelines:
- Should models with systemic risk be subject to third-party audits?
- How will the Commission keep the FLOPs threshold aligned with rapidly evolving model architectures?
Fine-tuning or modifying models? You may be the new provider
The GPAI guidelines also address downstream modifiers – companies or individuals who adapt a base model (eg via fine-tuning, quantization, or distillation).
If your modification uses more than one-third of the compute that trained the original model, you become a provider of a new GPAI model.
This means:
- you’re fully subject to the AI Act; and
- you must comply immediately – no two-year grandfathering.
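To make the one-third rule concrete, a downstream modifier would compare the compute spent on its modification with the (disclosed or estimated) compute used to train the base model. The sketch below assumes both figures are available in FLOPs, which in practice is often the hard part.
```python
# Sketch of the one-third rule for downstream modifiers: if the modification
# uses more than one third of the original training compute, the modifier is
# treated as the provider of a new GPAI model. Both inputs are assumed to be
# known or roughly estimated in FLOPs.

def becomes_new_provider(modification_flops: float, original_training_flops: float) -> bool:
    return modification_flops > original_training_flops / 3.0


# Hypothetical: fine-tuning with 5e23 FLOPs on a base model trained with 1e24 FLOPs.
print(becomes_new_provider(5e23, 1e24))  # True, since 5e23 > ~3.3e23
```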
Some questions for your business to address:
- How can downstream actors, including your company, estimate original compute use if it’s not disclosed?
- Will this discourage valuable innovation and experimentation?
The open-source model exception: Not as open as you think
Not all open-source models are exempt.
To qualify, your model must:
- be released under a free and open-source license allowing access, use, modification, and redistribution;
- be non-monetized; and
- include public access to model weights, architecture, and usage information.
What disqualifies you:
- Restricting usage to research or non-commercial purposes.
- Paywalls, ads, or usage fees.
- Requiring commercial licenses for scale.
Even exempt models must still:
- comply with copyright rules; and
- publish a training data summary.
Some questions that aren’t addressed by the AI Act GPAI Guidelines:
- Will companies be forced to relicense or withdraw open-source models?
- Can the open-source ecosystem survive without sustainable monetization options?
Grandfathering clause: Transition time for existing models
If you’ve already placed a GPAI model on the EU market before 2 August 2025, you have until 2 August 2027 to comply.
No need to retrain or “unlearn” the model – provided that either:
- you can’t retrieve the training data; or
- retraining would impose a disproportionate burden.
You must disclose and justify this in your documentation.
Some questions for your business to address:
- What changes to a GPAI model make it a “new” model that can no longer rely on the grandfathering clause?
- Will this clause open the door to selective transparency?
- Could competitors or regulators challenge the scope of these justifications?
GPAI Code of Practice: A voluntary path to compliance
The GPAI Code of Practice, released on 10 July 2025, provides a voluntary route to demonstrate compliance with Articles 53 and 55.
Signing the code offers benefits:
- Regulatory trust and reduced scrutiny
- Potentially lower fines
- Public perception of responsible development
However:
- You must implement the measures – not just sign.
- Non-compliance with the code could hurt credibility.
Some questions for your business to address:
- Will signing the code become de facto mandatory?
- Should codes of practice eventually be replaced with formal standards?
Key enforcement deadlines
- 2 August 2025: GPAI provider obligations apply
- 2 August 2026: Fines and enforcement powers become active
- 2 August 2027: Deadline for legacy models to comply
Fines can reach EUR15 million or 3% of global turnover – whichever is higher.
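As a simple illustration of how the cap works (a sketch only, not an indication of how an actual fine would be set):
```python
# Sketch: the maximum fine is the higher of EUR 15 million and 3% of total
# worldwide annual turnover. The turnover figure below is hypothetical.

def max_gpai_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)


# A company with EUR 2 billion global turnover faces a cap of EUR 60 million.
print(f"EUR {max_gpai_fine_eur(2e9):,.0f}")  # EUR 60,000,000
```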
Some questions that aren’t addressed by the AI Act GPAI Guidelines:
- Will the AI Office prioritize enforcement by sector, scale or risk?
- How will the EU coordinate with non-EU regulators on cross-border compliance?
Final thoughts: GPAI compliance as a strategic advantage
The GPAI guidelines under the AI Act are not just about regulation – they reflect a broader shift toward responsible AI development. Compliance is no longer a back-office legal issue. It’s a strategic lever for trust, differentiation, and investment.
Organizations that prepare now – by mapping their models, documenting training processes, and aligning with the Code of Practice – will not only avoid regulatory pitfalls but position themselves as leaders in the new AI economy.
Author: Giulio Coraggio
Data Protection and Cybersecurity
Cybersecurity information-sharing agreement
Article 17 of Legislative Decree No. 138/2024 (the NIS2 Decree) establishes that essential and important entities, and third parties such as service providers, can voluntarily exchange information on cybersecurity. This provision has been subject to an extensive interpretation by the National Cybersecurity Agency (ACN), which has provided clarifications through the FAQ published on its institutional website.
Content of Article 17 of the NIS2 Decree
Article 17 establishes that entities falling within the scope of the NIS2 legislation can voluntarily share a wide range of information relating to cybersecurity. This includes:
- information on cyber threats, near-incidents, and vulnerabilities
- adversarial techniques, procedures, and tactics
- indicators of compromise
- specific information on threat actors
- cybersecurity alerts and recommendations concerning the configuration of cybersecurity tools
The purpose of this information sharing is twofold:
- To prevent, detect, and respond to cyber incidents, including containment and recovery in case of compromise.
- To increase the level of cybersecurity by raising awareness of risks and hindering the spread of threats, supporting defence activities, disclosing vulnerability information, and promoting collaborative research on cyber threats between public and private entities.
During the annual update of information via the ACN portal, essential and important entities must notify the authority of any adherence to or termination of such agreements. This allows the authority to maintain an updated and structured overview of the level of informational cooperation in the field of cybersecurity.
ACN clarifications on Article 17 of the NIS2 Decree
ACN recently published new clarifications through FAQ ACI.4 available in the “Annual Update” section of its website. According to ACN, “In the context of the progressive implementation of the provisions of the NIS2 Decree, and pending the establishment of a Union-level consensus, information sharing activities also include the exchange of information occurring within the context of procurements that concern, even partially, cybersecurity services.”
The same FAQ provides a non-exhaustive list of contract types which, according to ACN’s interpretation, constitute information-sharing agreements subject to notification. These include contracts concerning the following services:
- NOC (Network Operations Centre)
- MDR (Managed Detection and Response)
- SOC (Security Operations Centre)
- CSOC (Cyber Security Operations Centre)
- CERT (Computer Emergency Response Team)
- VA/PT (Vulnerability Assessment and Penetration Testing)
- Red Teaming
- Cyber Threat Intelligence
Conversely, contracts that do not concern cybersecurity services – and in which the supplier, either voluntarily or through contractual clauses, informs the client of cybersecurity events of potential interest – are not considered information-sharing agreements and are excluded from the notification obligation.
With regard to the agreements deemed relevant, ACN specifies that it will be sufficient to notify an extract of such agreements, indicating:
- the parties to the agreement
- the subject matter of the agreement
- the clauses providing for the exchange of cybersecurity information and those governing the respective obligations of the parties
In light of the progressive implementation of the NIS2 Decree, ACN also clarifies that for the 2025 annual update (due by 31 July 2025), only agreements that are in force and signed on or after 16 October 2024 – the date of entry into force of the NIS2 Decree – must be notified. Agreements signed before this date are excluded.
Concerns about ACN’s interpretation
ACN’s interpretation raises some concerns, both from a systematic standpoint and regarding alignment with the European regulatory framework. In particular, the broad interpretation:
- isn’t currently reflected in similar positions by other national authorities in Europe (though only a limited number of member states have already transposed the NIS2 Directive);
- doesn’t appear consistent with the literal wording of Article 17, which refers to a voluntary exchange of cybersecurity information and not to a structural or contractual obligation arising from specialized services;
- seemingly contradicts ACN's own FAQ ACI.2, where information sharing is characterized as a best practice, rather than a mandatory or essential element for ensuring an adequate level of cybersecurity.
On the contrary, many – if not all – of the services listed in FAQ ACI.4 (SOC, CSOC, MDR, VA/PT, Red Teaming) appear to be professional services by nature, inherently designed to strengthen cybersecurity defences. So it’s difficult to classify them as voluntary information-sharing activities within the meaning of Article 17 of the NIS2 Decree, especially in the case of essential and important entities for whom these services represent a structural component of their cybersecurity posture.
What to do?
Given the current uncertainty surrounding whether certain types of contracts fall under the definition of information-sharing agreements, it’s advisable to conduct a thorough review of contracts belonging to the categories listed in FAQ ACI.4 and signed after 16 October 2024. Where such contracts exist, their content should be analysed to identify the clauses that provide for the exchange of cybersecurity information and those governing the reciprocal obligations of the parties with respect to such sharing.
Author: Giulia Zappaterra
Intellectual Property
SHEIN under scrutiny by the EU Commission: Investigated for violations of EU consumer protection laws
Following a joint EU-wide investigation, the European Commission and the Consumer Protection Cooperation Network (CPC) – which brings together national consumer protection authorities from across the EU – have formally notified the online marketplace SHEIN of a series of commercial practices deemed non-compliant with EU consumer protection laws.
The action, coordinated by the EU Commission with the active involvement of authorities from Belgium, France, Ireland and the Netherlands, has revealed multiple concerns with the shopping experience offered on the platform.
The reported infringements concern the entire online purchasing process and include:
- Misleading discounts: SHEIN is accused of advertising price reductions based on misleading reference prices, creating the illusion of better deals than those actually offered.
- Aggressive sales tactics: The platform allegedly pressures consumers into making immediate purchases through deceptive techniques, such as false purchase deadlines.
- Misleading information on return and refund rights: Consumers are reportedly given incomplete or incorrect information about their rights, and experience difficulties exercising them.
- Deceptive product labels: Some items are marketed as having special features which are, in fact, legally required standards.
- Greenwashing: Overstated or misleading environmental claims have been detected in relation to product sustainability.
- Limited contact accessibility: Consumers reportedly face obstacles in reaching out to SHEIN for inquiries or complaints.
The CPC network has also requested further clarifications from SHEIN to assess compliance with additional EU legal obligations, such as ensuring transparency of product rankings, reviews, and ratings, and clarifying the contractual roles and responsibilities between SHEIN and third-party sellers. Particular scrutiny is being applied to transactions involving non-professional third-party sellers, which may limit consumer protection.
This initiative by the CPC network complements the ongoing investigation under the Digital Services Act (DSA), led by the EU Commission. Both actions aim to ensure a safe and trustworthy online environment where consumer rights are fully protected.
SHEIN has been given one month to respond to the CPC's findings and submit concrete commitments to address the issues raised. If the response is deemed insufficient, national authorities may proceed with enforcement measures, including the imposition of fines based on the company’s turnover in the concerned member states.
Author: Carolina Battistella
From UPC to EPO: Decision on a consistent interpretation of patent claims
On 18 June, the Enlarged Board of Appeal (EBA) of the EPO, upon referral from the Technical Board of Appeal (TBA), issued a long-awaited decision concerning the interpretation of patent claims and the role of descriptions and drawings. The reference is to G 1/24, available at this link.
Specifically, the referring body raised two questions, which were considered admissible: what legal basis should be adopted for interpreting claims when assessing patentability under Articles 52–57 EPC, and whether, in this context, the description and drawings must always be consulted or only when the wording of the claims is ambiguous.
These questions stem from a longstanding conflict in case law, particularly regarding the role of Article 69 EPC and Article 1 of its Protocol on Interpretation, which states that the description and drawings must be taken into account when interpreting the claims. The issue is whether the interpretation should occur in all cases, or only when the claims are ambiguous.
On this point, the Enlarged Board first reiterated the well-established principle that claims are the starting point for determining whether an invention is patentable. The Board then clarified that the description and drawings serve as essential interpretative aids in defining the scope of protection conferred by a patent and always have to be taken into account – not only in cases of interpretative uncertainty arising from the literal wording of the claims.
By doing so, the Board rejected the line of case law that limited reference to the description to situations involving linguistic ambiguity. This restrictive approach was deemed by the EBA to be inconsistent with the case law of many national courts and, most notably, with that of the Unified Patent Court of Appeal, which most recently addressed the issue in NanoString Technologies v. 10x Genomics.
The impact of this ruling is expected to be significant and clearly marks a move towards greater harmonization between EPO and UPC in favour of a more uniform interpretation of the law and enhanced legal certainty for stakeholders in the patent field.
Author: Laura Gastaldi
Life Sciences
Italian Ministry of Health updates advertising guidelines for medical devices
On 27 June 2025, the Italian Ministry of Health published on its official website the new Guidelines on healthcare advertising for medical devices, in vitro diagnostic devices, and medical-surgical equipment. The document, dated 4 April 2025, provides long-awaited updates designed to clarify existing rules and establish operational measures for conducting advertising in line with technological developments and digital communication trends.
A simplified and updated framework
The core principles remain unchanged: transparency, accuracy, and truthfulness of advertising messages, consistent with Article 7 of Regulations (EU) 2017/745 (MDR) and (EU) 2017/746 (IVDR). The main change lies in introducing operational guidelines and specific provisions for online communication and social media, which have become critical marketing tools for the sector.
The new guidelines replace the previous ones (2010-2020), streamlining the framework and introducing modern tools to address the digitalization of healthcare advertising.
The Ministry of Health developed the document in cooperation with leading industry associations to achieve two objectives:
- Simplify compliance for operators through clear and uniform rules.
- Enhance public health protection by strengthening oversight and improving traceability of advertising content online.
Key updates: Procedures and social media
The guidelines set out the procedure for obtaining prior authorization, specifying the required documentation and payment methods, and confirm that companies can extend existing authorizations to additional media, provided the content remains identical. They also reiterate the rules for information intended solely for healthcare professionals, which doesn’t require prior authorization, provided access is restricted to reserved areas through disclaimers, pop-ups, or equivalent verification systems. The guidelines reaffirm principles established by case law regarding the use of testimonials and clarify which advertising scenarios for medical-surgical equipment don’t require prior authorization (while referring to the Ministry Decree of 26 January 2023 for medical devices).
A significant portion of the document focuses on online advertising, including corporate websites, product-specific websites, thematic portals, and social media. The annex includes operational instructions for well-known platforms (Facebook, Instagram, YouTube) and, for the first time, TikTok. The guidelines also allow the use of other social networks, subject to prior approval by the Ministry.
The inclusion of TikTok reflects its growing importance in marketing strategies. But to prevent uncontrolled virality and inappropriate interactions, the Ministry of Health imposes strict conditions. Companies can use three types of profiles:
- Corporate/institutional profiles: allowed without prior authorization, provided they don’t include any advertising for medical devices, IVDs, or medical-surgical products.
- Product or brand profiles: permitted only if all interactive functions are disabled (comments, duets, stitches, sharing, likes and emojis). Every piece of content, including scientific content, must receive prior authorization, except for products not subject to advertising approval.
- Thematic corporate profiles: permitted without prior authorization if they contain no advertising content; otherwise, the same restrictions as product or brand profiles apply.
In addition to single posts, companies can now request approval for advertising campaigns, which may include up to ten posts (three of which can be videos). Each campaign requires a separate fee. A critical requirement remains the ban on interactive or viral functions, ensuring that advertising remains transparent, controlled, and free from misuse.
A necessary update
With these new guidelines, the Ministry of Health aims to modernize the regulatory framework for healthcare advertising and align it with digital communication trends and the growing importance of social platforms in promoting medical devices.
The update simplifies some procedures and provides clearer operational instructions suited to today’s context. But the approach is still partly anchored in traditional control logic, signalling an ongoing need to balance strict oversight with the dynamics of a fast-evolving digital ecosystem.
Ultimately, practice will determine whether these rules can truly keep pace with change or merely follow it from behind.
Authors: Nicola Landolfi and Nadia Feola
Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Giulio Napolitano, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.
Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.
For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.
Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.
You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as Diritto Intelligente, a monthly magazine dedicated to AI, here.
If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.