Innovation Law Insights
22 May 2025
AI Law Journal
Diritto Intelligente – May issue available now
The May issue of the AI Law journal published by DLA Piper’s Italian Intellectual Property and Technology team is now available with the latest updates on the legal challenges of artificial intelligence. Read it here.
Artificial Intelligence
AI Act literacy: European Commission’s Q&A raise the bar beyond simple training
AI Act literacy takes center stage in the latest regulatory update, as the European Commission has published dedicated Q&A clarifying the literacy requirements under Article 4 of the AI Act, which have applied since February 2, 2025.
This new guidance confirms what legal and compliance professionals suspected: meeting the AI Act’s obligations means going far beyond generic training modules – it requires a structured, risk-based governance approach to using AI across organizations.
The key takeaways from the Q&A on AI Act literacy
The Q&A break down how organizations have to approach AI literacy. It’s no longer sufficient to host an annual awareness session. The AI Act demands tailored, ongoing initiatives that reflect the real-world responsibilities and risks associated with each role interacting with AI systems.
Here’s what the European Commission outlines:
- AI literacy means capability – not just awareness. Staff must understand, use and evaluate AI systems responsibly, including their limitations and potential harms.
- Training must be role-specific. Developers, users, and deployers each require different levels and types of AI education, aligned with their technical and business responsibilities.
- AI literacy applies to external actors, including third-party service providers and contractors, not just internal teams.
- The obligation under Article 4 applies even in non-high-risk AI scenarios if the system affects people’s rights, safety, or essential services.
- Organizations must assess knowledge, responsibilities, and risk exposure – not just deliver a one-size-fits-all course.
AI Act literacy means governance, not just compliance
The Q&A make it clear: AI Act literacy is a cornerstone of AI governance, not a stand-alone initiative. To comply, organizations need to embed literacy into a wider organizational model, which includes:
- Internal governance rules for designing, deploying, and overseeing AI systems.
- Policies and procedures aligned with the AI Act’s risk classification (eg minimal, limited, high risk).
- Continuous improvement processes ensuring systems remain explainable, transparent, and aligned with ethical and legal standards.
In other words, training is only the starting point. Organizations need to create a culture of accountability and informed AI usage, involving all stakeholders across the AI lifecycle.
Why AI Act literacy should be a priority now
Publishing the Q&A sends a strong message: AI Act literacy is not a box to tick – it’s a legal requirement with real operational consequences. Regulators will expect evidence of structured training, clarity on who was trained and how, and documented policies ensuring that literacy is part of the organization’s AI risk management framework.
To respond effectively, businesses should:
- conduct a literacy and risk assessment across all AI-related functions;
- develop role-specific training plans supported by legal, compliance, and technical leadership; and
- establish a central oversight function (such as a Chief AI Officer or AI Compliance Officer) to manage literacy and governance efforts.
Final thoughts
AI Act literacy is more than a regulatory buzzword – it’s a call to action for organizations across the EU and beyond. With the European Commission’s Q&A now in place, companies have to rethink how they train, govern, and manage their AI systems.
Failure to embed AI literacy into a structured governance model will not only put organizations at compliance risk – it could expose them to reputational damage, user distrust, and operational inefficiencies.
For more on this topic, read our AI Law Journal. Click here to see the latest issues.
Author: Giulio Coraggio
Data Protection and Cybersecurity
New recommendations on microtransactions and the use of virtual currencies in video games
Introduction
The Consumer Protection Cooperation Network (CPC Network) of the European Commission has published new guidelines on the use of virtual currencies in video games (Guidelines). They set out a series of principles aimed at regulating microtransactions and the offering of virtual currencies in video games. Although not legally binding, the Guidelines provide essential guidance for companies in the video game sector.
Microtransactions and virtual currencies
Microtransactions (ie in-game purchases) are typically low-value payment operations carried out within video games. These operations are performed in exchange for digital content such as skins, items, power-ups, or for purchasing virtual currencies to be used for further purchases. Although generally limited in value, these transactions can quickly accumulate and result in significant expenditure, especially when the player belongs to a potentially vulnerable group.
The mechanism of microtransactions often relies on the use of virtual currencies purchasable with real money, which can obscure the actual cost of digital items. Certain features, such as “loot boxes”, where the outcome of the purchase is uncertain, resemble gambling mechanics. For these reasons, microtransactions and virtual currencies are receiving increasing attention from European authorities, with the aim of ensuring full compliance with consumer protection regulations.
Guidelines for companies: Key principles and operational measures
The CPC Network has identified seven key principles to guide developers and video game companies in offering virtual currencies, with the objective of complying with EU law and avoiding misleading or unfair practices.
- Price transparency: The price of digital content must be clearly expressed in real-world currency, even when the purchase is made using virtual currencies. By adding a layer of abstraction between the act of payment and the actual expense, virtual currencies make it more difficult for consumers to understand the real amount associated with a given purchase. In this regard, measures such as always indicating the real-world price alongside the virtual currency amount can enhance transparency and ensure that consumers fully understand the actual value of in-game products.
- Prohibiting practices that obscure costs: The Guidelines emphasize the need to avoid complex structures involving multiple layers of virtual currencies or conversions that make it difficult to calculate the real value of purchases. For instance, offering an excessive number of different types of virtual currencies in a single game (eg “gems”, “gold,” “tokens”) may be considered a confusing mechanism that makes it difficult to determine the final monetary equivalent of the in-game purchase.
- Purchase of undesired currency: It’s necessary to prevent virtual currency bundles that require the consumer to spend more than is actually needed to buy specific content. For instance, offering virtual currency only in fixed packages (eg 500 coins) when an item costs 450 coins – leaving unused “leftovers” and preventing the player from selecting the exact amount of currency to purchase – may be considered a technique aimed at “forcing” the consumer to spend more than intended on that specific content.
- Clear pre-contractual information: The Guidelines stress the need to provide all relevant information prior to purchase (eg content features, consumer rights, payment methods, and withdrawal rights). These obligations apply to both purchases made with real money and those made using virtual currencies.
- Respect for the right of withdrawal: The Guidelines highlight the need to ensure players have the right of withdrawal for in-game purchases, including those made with virtual currencies. It’s essential to guarantee the right of withdrawal for any unused virtual currency and not to restrict this right.
- Contracts and unfair clauses: In accordance with consumer law, contractual terms must be drafted in clear language and must not contain clauses that limit consumer rights or allow the seller to make unjustified unilateral changes. It’s necessary to prevent clauses that create imbalances between the rights and obligations of consumers and developers.
- Respect for consumer vulnerability: The Guidelines highlight the importance of avoiding practices that exploit the vulnerability of certain groups, such as minors or users willing to spend large amounts (“whales”). Video game operators should also adopt mechanisms to protect vulnerable groups (eg minors), including parental control systems and other effective protection measures. From a marketing and advertising perspective, the Guidelines refer to the need to avoid designing advertisements that directly encourage children to make purchases.
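To make the pricing principles above concrete, here is a minimal illustrative sketch. All names, the exchange rate, and the bundle sizes are hypothetical examples, not taken from the Guidelines; it simply shows a dual price display (virtual currency plus real-world equivalent) and the “leftover” overspend that fixed currency bundles can force:

```python
# Illustrative sketch (hypothetical names and values): dual-currency price
# display and the overspend forced by fixed virtual-currency bundles.

COINS_PER_EUR = 100  # assumed exchange rate: 100 coins = EUR 1.00


def price_label(cost_in_coins: int) -> str:
    """Show the virtual-currency price alongside its real-world equivalent."""
    eur = cost_in_coins / COINS_PER_EUR
    return f"{cost_in_coins} coins (EUR {eur:.2f})"


def forced_overspend(cost_in_coins: int, bundle_sizes: list[int]) -> int:
    """Leftover coins a player must buy when currency is only sold in bundles."""
    # Buy the smallest single bundle that covers the cost, if one exists ...
    for size in sorted(bundle_sizes):
        if size >= cost_in_coins:
            return size - cost_in_coins
    # ... otherwise stack the largest bundle until the cost is covered.
    largest = max(bundle_sizes)
    n = -(-cost_in_coins // largest)  # ceiling division
    return n * largest - cost_in_coins


print(price_label(450))              # 450 coins (EUR 4.50)
print(forced_overspend(450, [500]))  # 50 coins of unusable "leftover"
```

In the Guidelines’ 450-coin example, a player restricted to 500-coin packages is left with 50 unusable coins; letting players buy the exact amount needed removes that forced overspend.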
Conclusion
The Guidelines published by the Consumer Protection Cooperation Network aim to offer a clear framework for the future regulation of microtransactions and virtual currencies in video games. Although non-binding, these recommendations largely represent a specification and exemplification of existing consumer protection legislation in the video game sector, while also serving as a clear indication of regulatory expectations regarding commercial practices adopted in video games. Companies in the sector have to carefully consider the matter and implement microtransaction systems and virtual currencies that comply with applicable consumer law.
Author: Federico Toscani
Intellectual Property
Influencer marketing: IAP publishes 2024 Annual Report and Almed Report
On April 29, 2025, the Italian Advertising Self-Regulatory Authority, Istituto dell’Autodisciplina Pubblicitaria (IAP), published its Annual Report on activities carried out in 2024 (Annual Report). Around the same time, IAP also released the 2024 Report developed in collaboration with Almed (Alta Scuola in Media, Comunicazione e Spettacolo) of the Università Cattolica del Sacro Cuore (Almed Report), focusing on monitoring transparency in influencer marketing. The objective was to assess the level of transparency in influencer communications in Italy.
According to the Annual Report, in 2024, IAP issued 103 prior opinions, providing its feedback within just one day in nearly 80% of cases. The product sectors requesting the highest number of prior evaluations were: finance and insurance (26%), food and beverage (16%), and electronics and telecommunications (14%).
The Annual Report also shows that IAP examined 238 cases in 2024, an increase compared to 2023. Of these, 202 cases were resolved quickly – either because the advertiser amended the advertising message at the request of the Review Committee, or because the case was closed with no violation of the Code found. IAP issued 21 injunctions, requiring advertisers to cease disseminating messages deemed non-compliant with the Code. IAP also delivered 15 official rulings during the year.
Overall, the Annual Report indicates an increase in preventive action regarding advertising, a decrease in sanctions, and the continuing relevance of influencer marketing, especially concerning transparency and the correct disclosure of promotional content, pending the upcoming publication of the Code of Conduct.
An additional key insight concerning influencer marketing comes from the transparency monitoring conducted as part of the Almed Report. This study analyzed the activity of 333 Italian influencers over a six-month period across the fashion, beauty, family, and finance sectors, to assess how transparently they communicated promotional content. From a total of 144,831 posts published on Instagram, TikTok, and YouTube during the monitoring period, over 8,000 advertising-related posts containing explicit brand references were selected for review.
The data reveals an overall positive trend: 76% of the analyzed content was considered compliant with transparency regulations, 20% was partially compliant, and only 4% was found to be non-compliant, with no disclosure of promotional intent. Instagram had the highest percentage of compliant content, followed by YouTube, while TikTok showed the most uncertainty in terms of transparency, with higher percentages of both non-compliant content (6.1%) and partially compliant content (27.8%) compared to the other platforms.
The monitoring also revealed that certain formats present greater challenges for correctly disclosing advertising. Specifically, Instagram Stories and TikTok videos emerged as the most problematic formats for proper advertising disclosure. These high percentages are likely due to the widespread use of these formats among influencers, as well as possible technical or expressive difficulties in adapting them to current regulations.
The Almed Report further highlights significant differences among the monitored sectors:
- Fashion: Advertising is generally transparent, with 90% of content deemed compliant. But Instagram stands out as the most problematic platform in this sector, with 18.6% of content lacking transparency and 49.2% being only partially compliant.
- Beauty: This sector shows only 70% compliant content, with a notable gray area – 6% of content was partially compliant, especially in short video formats on TikTok and Instagram. The primary issue is the improper or unclear display of the #adv label.
- Finance and family: These are the most critical sectors. In particular, more than 10% of promotional content shared by family influencers was non-transparent, lacking any explicit indication of advertising. This may be due to the fact that many family influencers – often micro-influencers with fewer than 50,000 followers – have not yet fully adopted advertising transparency practices, partly out of concern for damaging the trust-based relationship with their audience.
The Almed Report highlights that although transparency in influencer marketing in Italy is generally improving, specific challenges remain – especially in short-form content and in emerging sectors. While the data reveals widespread adoption of transparent communication practices among influencers, it also points to the presence of incorrect or only partially correct disclosures, such as when advertising is referenced but not in a way that complies with the IAP’s Digital Chart regulation. These gray areas, where the commercial nature of content isn’t properly disclosed, underscore the need for further training – particularly for younger influencers or those active in emerging segments – to ensure the adoption of fully correct and transparent communication practices.
Author: Carolina Battistella
‘Human Authored’: The Authors Guild’s initiative for transparency when using AI
In the US, in response to the growing use of AI, the Authors Guild, the organization that represents writers, authors, and translators, has recently launched “Human Authored”, an initiative aimed at ensuring greater transparency in the relationship between AI, authors, and readers.
Through the portal, Authors Guild members can register their literary works, formally declaring that they originate from human intellect and are not produced by algorithms or generative AI systems. Works that meet these criteria may display the official “Human Authored” logo, which has been submitted for trademark registration, on book covers and promotional materials.
According to representatives of the organization, the “Human Authored” program is intended to promote transparency, enabling readers to clearly recognize the creative origin of a work. “Human Authored” means that the book’s text was created and written by a human author and not generated by AI. However, the use of new technological tools, including AI instruments, isn’t completely restricted: to qualify to use the logo, only a minimal and marginal portion of the text can be generated or modified by AI, which can be employed for trivial tasks such as spellchecking, grammar correction, brainstorming, or research. The organization provides a general benchmark for evaluating minimal use: whether a human editor making the same changes as the AI system would have a potential claim to rights in the work, in the absence of a specific contract defining the scope of their involvement.
Authors participating in the “Human Authored” initiative have to sign a license agreement for each registered title, declaring and warranting that the text is human authored, meaning all but a marginal portion of the text was written by a person and not by AI. Measures have been implemented to prevent misuse of the registration and its corresponding logo, including assigning a unique registration number to each registered title, helping to deter unauthorized or fraudulent use of the “Human Authored” logo. Currently, the portal is accessible only to Authors Guild members, but the organization has announced plans to open “Human Authored” to unaffiliated writers in the future. The broader goal is to expand the culture of transparency in AI usage across the global publishing landscape.
The initiative launched by the Authors Guild highlights how the use of AI systems in creative processes is currently one of the most debated topics, especially in relation to the legal protection of outputs under copyright law. For works to be protected, many national regulations require human contribution. For example, in Italy, Law No. 633/1941 refers explicitly to “intellectual work”, requiring an original human input. A copyright-protected work has to be the original expression of the author’s individual creativity. In this context, transparency in the creative process, specifically the extent to which AI tools have been used, becomes a key factor not only for ethical and reputational reasons but also for the actual assertion of rights over a work.
The Human Authored project addresses the growing attention of readers and users to transparency regarding content generated entirely or in part with the help of AI systems. For this reason, major social media platforms are gradually introducing guidelines to flag content that has been generated or modified using AI, in response to growing user demands and concerns about the risks associated with misinformation and the manipulation of content available online.
These initiatives reinforce the central importance of guaranteeing the “human” origin of content. Through explicit declarations about the use (or non-use) of AI, authors and creators can preserve public trust and respond to the collective demand for transparency concerning the origins of creative works and digital content.
Author: Chiara D’Onofrio
Technology Media and Telecommunication
EC launches public consultation on International Digital Strategy
On May 7, 2025, the European Commission launched a public consultation to gather input ahead of the adoption – scheduled for June 2025 – of a Joint Communication on the International Digital Strategy.
The initiative, jointly promoted by the Directorate-General for Communications Networks, Content and Technology of the European Commission (DG Connect) and the European External Action Service (EEAS), follows the request made last year by the European Council to the Commission and the High Representative of the Union for Foreign Affairs and Security Policy to present a joint communication on strengthening the EU’s leadership in global digital affairs. The objective is to keep pace with the “global tech race” and to avert potential risks the EU might otherwise face in terms of competitiveness.
The consultation document notes that, in the current global landscape, digital technologies play an increasingly significant role in international security issues across various areas (eg cybersecurity). In this context, “tech competitiveness” is particularly important. To ensure it, international tech cooperation and trade with key partners and allies are necessary, along with proper diversification and risk mitigation policies.
With regard to the objectives pursued by the initiative, the consultation document states that the Joint Communication on the EU’s International Digital Strategy aims to define a strategy to step up internal and external actions to foster the EU’s technological sovereignty, democracy, and security, acting in close coordination with the member states and EU tech companies.
The action planned under the Joint Communication will focus on:
- leveraging digital cooperation with partner countries and reinforcing the existing network of digital partnerships and alliances to boost the EU’s tech competitiveness and sovereignty, in line with the objectives of the Competitiveness Compass (ie a strategic plan of the European Commission aimed at boosting the EU economy and strengthening its global role in innovation and sustainability);
- concrete actions in international cooperation, including in the fields of technologies such as AI and quantum technologies, as well as in cybersecurity and secure connectivity;
- building an integrated offer of European tech solutions to international partners as part of the Global Gateway (ie an EU strategy mainly aimed at building sustainable and reliable connections), involving EU tech companies and innovators to support the digital transformation of partner countries;
- improving the coordination of unified EU positions in multilateral and plurilateral fora;
- using digital diplomacy tools to strengthen the EU’s engagement with partner countries.
Stakeholders interested in the public consultation can submit contributions until May 21, 2025.
Authors: Massimo D’Andrea, Flaminia Perna, Matilde Losa
Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.
Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.
For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.
Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.
You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as Diritto Intelligente, a monthly magazine dedicated to AI, here.
If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.