Innovation Law Insights

4 August 2025
Artificial Intelligence

Using AI to commit insurance fraud

AI has many benefits. It can improve efficiency, help make better decisions, and encourage innovation across different industries. But these advantages also come with serious risks - especially the potential for misuse in fraud or deception.

Like any powerful technology, AI can be used for both helpful and harmful purposes. This makes strong and thoughtful governance essential to maximize its benefits and protect against misuse.

Roberto Copia, Director of the IVASS Inspectorate Service, spoke about this issue at the 4th National Congress of the CODICI Association on 17 May 2025. He highlighted a growing ethical tension: the same AI that improves the efficiency of the insurance industry also hands fraudsters more sophisticated tools.

AI and Insurance: An inseparable alliance

AI has become an indispensable tool in the insurance sector. Its applications range from risk assessment to product development, claims management and fraud prevention. Predictive algorithms, neural networks, and machine learning models allow the processing of vast datasets, improving underwriting accuracy, accelerating claim settlements and strengthening insurers’ anti-fraud capabilities.

But these very tools - powerful, scalable and increasingly accessible - are also being weaponized by fraudsters. As Copia noted, “Those who seek to commit fraud are often skilled innovators - frequently one step ahead of those tasked with stopping them”.

Insurance fraud in the age of AI: A quantum leap in criminal sophistication

Insurance fraud has always been a structural problem in the sector. Yet today, it’s undergoing a qualitative shift. We’re no longer dealing solely with fraudulent damage to property (Article 642 of the Italian Criminal Code) or fictitious claims. Modern fraud is digital, automated and highly sophisticated. AI has become a powerful enabler for those seeking to manipulate data, forge documents or create false digital identities.

A paradigmatic example is the Ghost Broker scam: websites that appear legitimate, often employing advanced social engineering techniques, real logos, and data stolen from unwitting intermediaries. AI allows these fraudulent portals to appear increasingly credible, complete with chatbots simulating customer service, AI-driven profiling of potential victims, and the delivery of highly personalized fake offers. The result is a seemingly flawless customer journey. But the buyer is left uninsured and unknowingly defrauded until a roadside inspection reveals the deception.

The cost of fraud: More than just financial

The statistics reveal a concerning trend. Forbes estimates global insurance fraud costs around USD300 billion annually.

In Italy, the situation is equally concerning: according to IVASS’ 2023 Anti-Fraud Report, the share of suspected fraudulent claims in the motor liability sector rose sharply from 16% to 21% in just one year.

But the impact of insurance fraud extends far beyond the immediate financial losses suffered by insurers. At a systemic level, the consequences are even more troubling. Public safety is compromised when uninsured vehicles remain on the roads, placing at risk third parties who must then turn to the Guarantee Fund (Fondo di Garanzia per le Vittime della Strada), financed by all law-abiding, insured drivers.

Reputational damage is another major concern, as legitimate insurers and intermediaries see public trust eroded, particularly in cases involving identity theft, an offence addressed under Article 494 of the Italian Criminal Code.

The growing incidence of fraud undermines confidence in digital channels, slowing the industry’s transition to more efficient and accessible online services. And consumers themselves aren’t spared: many of the victims are young, digitally literate individuals who, despite their digital skills, often lack a clear understanding of how insurance mechanisms work, making them particularly vulnerable to scams.

The role of the AI Act and the institutional response

To address these emerging challenges, the EU has adopted Regulation (EU) 2024/1689 (AI Act), a cornerstone in the legal governance of AI in the internal market. While its primary aim is to establish a risk-based framework ensuring safety and fundamental rights, the AI Act also has relevant implications for the insurance sector, particularly in relation to fraud prevention and risk management.

Some AI systems, such as those used for biometric identification, credit scoring, or emotion recognition, are classified as high-risk and must comply with strict requirements, including risk assessment, transparency, traceability, and human oversight. Although “AI systems used for the purpose of detecting financial fraud” are not classified as high-risk under the AI Act, they may be considered high-risk when combined with other high-risk systems or when they include key high-risk features. This opens the door to closer scrutiny of both the AI tools used by insurers and those misused by fraudsters.

The Regulation also introduces new obligations for AI systems with transparency risks, especially those capable of generating synthetic content, deepfakes, or convincingly imitating real individuals - frequent features in ghost broker scams and identity theft schemes. Providers and deployers of such systems now have to ensure transparency, watermarking, and documentation, enhancing the traceability of malicious uses.

These provisions, combined with sector-specific laws (eg the Digital Services Act (DSA) and the GDPR), may serve as legal levers to hold developers and operators accountable when their technologies are repurposed for fraudulent ends. More importantly, the AI Act places a duty on member states and competent authorities to monitor the use of high-risk systems and to facilitate coordination at EU level.

This provides an opportunity for national regulators such as IVASS to play a proactive role not only in supervising industry compliance but also in reporting and investigating abuses involving AI in the insurance domain.

IVASS, in cooperation with law enforcement authorities, has introduced a range of countermeasures to combat digital insurance fraud. These include the takedown of fraudulent websites, public awareness campaigns, network analysis, centralized claims databases, the creation of white lists, and the use of AI tools to identify suspicious patterns. Despite these efforts, institutional responses often struggle to keep pace with the rapid evolution of fraud techniques. At the same time, many insurers are investing in anomaly-detection systems and cross-verification technologies. Yet, criminal complaints remain limited, and fraud continues to be viewed - too often - as an unavoidable cost of doing business.
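
To make the detection side concrete, below is a minimal, purely illustrative sketch of the kind of anomaly-detection logic such systems build on, using scikit-learn's IsolationForest. The claim features, figures and thresholds are hypothetical and are not drawn from any real insurer's system.

```python
# Purely illustrative: flagging statistically unusual motor claims with an
# unsupervised anomaly detector. Features and figures are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per claim: amount (EUR), days between policy
# inception and the claim, number of prior claims by the policyholder.
typical_claims = np.column_stack([
    rng.normal(3_000, 800, 500),   # ordinary repair costs
    rng.uniform(30, 365, 500),     # claims spread across the policy year
    rng.poisson(0.3, 500),         # few prior claims
])
incoming_claims = np.array([
    [14_500, 4, 6],   # large claim days after inception, many priors
    [3_200, 180, 0],  # unremarkable claim
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(typical_claims)

# predict() returns -1 for anomalies and 1 for inliers.
for claim, label in zip(incoming_claims, detector.predict(incoming_claims)):
    print(claim, "-> flag for human review" if label == -1 else "-> pass")
```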

Towards a new paradigm: Accountability, awareness and law

Addressing this challenge requires both technical solutions and a cultural shift. Legal frameworks have to catch up with the rapid rise of generative AI, especially when it’s used for criminal purposes such as counterfeiting, fraud or identity theft. Moving forward, several key actions are critical:

  • Automated accountability mechanisms: AI systems must not only detect anomalies but also certify and document fraudulent activity to support legal proceedings.
  • Enhanced public-private collaboration: stronger data sharing and joint risk analyses between regulators and private actors are needed.
  • Mandatory consumer education: as in the banking sector, consumers should receive mandatory training on the safe and informed use of digital insurance platforms.
  • A harmonized EU framework: a cohesive European approach to ghost broking and digital insurance fraud is required, integrating the objectives of the DSA with sectoral supervisory efforts.

Author: Giacomo Lusardi

 

Blockchain and Cryptocurrency

Is the Genius Act that smart? Stablecoins, Sovereignty, and the Global Monetary Divide

As stablecoins evolve from speculative tools into pillars of financial infrastructure, they are increasingly caught in the crossfire between regulatory philosophies and monetary geopolitics. In this shifting landscape, the United States and the European Union offer two sharply divergent responses.

The Guiding and Establishing National Innovation for U.S. Stablecoins Act (the Genius Act), signed into law by the President of the United States of America (USA) on 18 July 2025, enshrines a deregulatory approach that encourages private dollar-based innovation on a global scale - a vision that the European Parliament has provocatively labelled cryptomercantilism.

By contrast, the European Union (EU) Regulation 2023/1114, establishing a harmonized framework on Markets in Crypto-Assets (MiCAR), marks a defensive yet strategic assertion of monetary sovereignty, embedding digital assets within public-law constraints, supervisory thresholds, and central bank oversight.

Drawing on the European Parliament study EP 760.274/2025 (the 2025 EP Study) and a range of transatlantic analyses, this article compares the regulatory architectures of the Genius Act and MiCAR, their treatment of stablecoin issuers, reserve regimes, and supervisory powers, and, ultimately, their opposing visions of digital money.

  1. Cryptomercantilism: The Dollar’s Digital Offensive

In the new architecture of digital finance, monetary sovereignty increasingly hinges on states’ ability to shape, or resist, cross-border infrastructures of money. The United States has embraced an assertive strategy known as “cryptomercantilism”, a term coined by the European Parliament to describe the promotion of dollar-backed stablecoins as tools of geopolitical expansion.

Evoking classical mercantilism, where power stemmed from export and reserve accumulation, its digital analogue replaces goods with dollar-pegged tokens embedded in global payment networks. President Trump is manifestly encouraging the development of dollar-backed stablecoins worldwide, framing them as instruments to reinforce dollar dominance. Treasury officials confirmed that these coins would help preserve the dollar’s role as the world’s reserve currency.

This strategy has two aims.

  1. To embed the dollar in emerging digital economies, especially in regions with weak banking infrastructure, where USA-based crypto firms promote stablecoins as tools of inclusion.
  2. To generate demand for USA debt: by requiring stablecoins to be backed by low-risk dollar assets, notably Treasuries, cryptomercantilism transforms private issuance into a public financial lever.

The Genius Act institutionalises this vision. It provides a permissive licensing regime for both banks and non-banks, allows reserve holdings in a wide array of dollar-denominated assets, and imposes no territorial constraints, thus enabling global circulation. US tech giants further boost this diffusion by embedding stablecoin payments into their ecosystems, offering incentives like cashbacks and loyalty points.

For the EU, this poses an existential challenge. As warned by the 2025 EP Study, the risk lies not only in domestic disintermediation but in the erosion of the Euro’s international role and the weakening of EU monetary governance. MiCAR, in this light, is not merely financial regulation: it is an act of monetary self-defence.

  2. MiCAR vs Genius Act: Diverging Architectures of Digital Money

The EU and the USA have responded to the rise of stablecoins with two fundamentally different regulatory models.

  1. MiCAR, in force since July 2024, adopts a public-law logic: it aims to contain systemic risks, preserve monetary sovereignty, and integrate stablecoins into a tightly supervised financial ecosystem.
  2. The Genius Act, enacted in July 2025, reflects a deregulatory ethos: it encourages private monetary innovation and reinforces the dollar’s global dominance.

As highlighted in Annex I of the European Parliament’s study, the two frameworks diverge along seven key dimensions:

  1. Issuer eligibility: MiCAR restricts issuance to credit or e-money institutions, while the Genius Act opens the market to non-bank entities under state or federal licences, including fintechs and tech giants.
  2. Redemption: Both require 1:1 fiat redemption. However, MiCAR allows supervisory intervention (e.g. redemption gates), whereas the Genius Act relies on post-failure mechanisms like bankruptcy prioritisation.
  3. Systemic designation: MiCAR identifies “significant” issuers and subjects them to stricter obligations. The Genius Act applies a one-size-fits-all approach, ignoring systemic risk stratification.
  4. Reserve assets: MiCAR demands high-quality, low-risk reserves with strict custody rules. The Genius Act permits broader asset types, increasing flexibility but raising exposure to liquidity or credit shocks.
  5. Supervision: MiCAR centralises oversight via the EBA and national authorities. The Genius Act fragments supervision between state and federal actors, with limited proactive enforcement.
  6. Extraterritoriality: MiCAR prohibits third-country stablecoins unless licensed in the EU. The Genius Act promotes cross-border equivalence and regulatory export, reinforcing the dollar’s reach.
  7. Monetary safeguards: MiCAR empowers the European Central Bank (ECB) to restrict non-Euro tokens threatening the Euro area. The Genius Act contains no such safeguards and implicitly supports global dollarisation.

These differences are more than technical. They reflect opposing visions: one seeks to contain the influence of private digital money; the other to project it globally. MiCAR is a model of regulatory containment. The Genius Act is a blueprint for monetary expansion.

  3. Systemic Risks and the European Monetary Firewall

Dollar-backed stablecoins do not just challenge regulatory frameworks; they threaten to erode monetary sovereignty beyond USA borders. As underscored by the 2025 EP Study, these risks span both internal financial fragility and external monetary displacement.

I. Internal vulnerabilities

Even when fully collateralised, stablecoins remain vulnerable to market stress. Illiquid or opaque reserves can trigger speculative runs, while redemption promises may collapse under pressure, as past Tether and Circle depegging events have shown. Furthermore, reserve reallocations in search of yield, particularly from EU to USA assets, could fuel financial contagion and impair monetary transmission within the eurozone. As users shift funds into non-bank stablecoins, traditional banks may face liquidity constraints, undermining their lending capacity.

II. Cross-border dollarisation

Widespread use of USD-pegged stablecoins within the EU could lead to de facto digital dollarisation. This would expose users to exchange rate volatility, constrain the ECB’s policy autonomy, and weaken the euro’s international role. Moreover, unlicensed issuers from third countries create enforcement gaps and open the door to AML/CTF vulnerabilities, especially in the absence of global regulatory alignment.

III. The MiCAR response: pre-emptive monetary defence

MiCAR confronts these threats through a dual safeguard mechanism:

  • Quantitative thresholds, under Articles 23 and 58: Daily volumes above 1 million transactions or EUR200 million trigger supervisory intervention.
  • ECB discretionary power, under Article 24, paragraph 3: Regardless of metrics, the ECB may prohibit issuance of foreign-pegged tokens that threaten monetary stability.

In this light, MiCAR goes beyond prudential oversight. It acts as a monetary firewall, preserving the euro’s integrity in a digitised and dollarised world.
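
A minimal sketch of this two-layer mechanism follows, using the thresholds as described above; the data structure, the ECB flag and all names are illustrative assumptions, not a restatement of the Regulation's text.

```python
# Illustrative sketch of MiCAR's dual safeguard as described above:
# quantitative triggers (Articles 23 and 58) plus the ECB's discretionary
# power (Article 24(3)). Names and structures are hypothetical.
from dataclasses import dataclass

DAILY_TX_THRESHOLD = 1_000_000           # transactions per day
DAILY_VALUE_THRESHOLD_EUR = 200_000_000  # EUR per day

@dataclass
class StablecoinMetrics:
    name: str
    daily_transactions: int
    daily_value_eur: float
    ecb_flagged: bool  # hypothetical: ECB deems the token a monetary threat

def supervisory_intervention(m: StablecoinMetrics) -> bool:
    breaches_thresholds = (
        m.daily_transactions > DAILY_TX_THRESHOLD
        or m.daily_value_eur > DAILY_VALUE_THRESHOLD_EUR
    )
    # The ECB's discretionary power applies regardless of the metrics.
    return breaches_thresholds or m.ecb_flagged

token = StablecoinMetrics("usd-pegged-token", 1_200_000, 150_000_000.0, False)
print(supervisory_intervention(token))  # True: transaction threshold breached
```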

  4. From Regulatory Defence to Digital Monetary Sovereignty

Faced with the extraterritorial ambition of the Genius Act, the EU’s response extends well beyond MiCAR. It reflects a broader strategy to reassert digital monetary sovereignty - not by mimicking deregulatory models, but by combining public monetary innovation, regulatory consistency, and multilateral leadership.

I. The digital euro as sovereign anchor

At the core lies the digital euro: a central bank-issued currency designed to preserve public money in the face of private, dollar-based alternatives. Unlike stablecoins, it is a risk-free central bank liability, embedded in democratic oversight and privacy-preserving infrastructure. Crucially, it aims not to compete globally, but to ensure domestic monetary resilience - a “monetary anchor”, as described by ECB Board Member Fabio Panetta.

Integration with the European Blockchain Services Infrastructure (EBSI) further reinforces autonomy, fostering sovereign digital architecture decoupled from USA-dominated platforms.

II. Regulatory standards as strategic leverage

The EU has consistently refused to lower its standards to attract stablecoin issuers. MiCAR rejects third-country licensing shortcuts and enshrines high compliance thresholds. This stance is not defensive, but deliberate: it affirms that regulatory standards are vectors of sovereignty, allowing the EU to shape, rather than follow, global norms.

III. Multilateralism over regulatory competition

Finally, the EU advances its model through international forums such as the FSB and IOSCO. While open to equivalence, it conditions it on substantive convergence - especially on reserve backing, redemption rights, and supervisory powers. This multilateral strategy aligns with the EU’s broader shift toward strategic autonomy: from market liberalism to principled rulemaking.

In the stablecoin domain, as in AI or data governance, the EU’s ambition is clear: to define the rules of digital sovereignty not by scale, but by structure.

  5. Genius or Hubris? Reframing the Regulatory Divide

While often portrayed as parallel responses to a common challenge, MiCAR and the Genius Act embody two fundamentally opposing visions of the future of money. The Genius Act, in name and design, reflects a form of regulatory hubris: it assumes that private actors can safely issue dollar-denominated money at global scale with minimal oversight, transforming stablecoins into tools of digital mercantilism and geopolitical leverage.

MiCAR, by contrast, is not a mere financial statute; it is a monetary constitution. It asserts that money is a public infrastructure, one that must remain embedded in legal, institutional, and democratic frameworks. Innovation, in this vision, is welcome but only when it respects the logic of monetary order and the primacy of sovereign law over private code.

This divergence reflects more than regulatory philosophy. It reveals a geopolitical rift: between dollar hegemony and European digital sovereignty, between networks governed by market incentives and systems governed by public authority. The regulation of stablecoins is thus no longer a technical matter - it is a constitutional question for the digital age.

In resisting the deregulatory drift of the Genius Act, Europe signals its ambition to propose an alternative model of digital finance - one that privileges sovereignty over scale, law over code, and public stability over private expansion. Whether this vision will prevail remains uncertain. But in the contest for the future of money, genius alone is not enough. What matters is being sovereign and knowing exactly what that entails.

Authors: Andrea Pantaleo & Giulio Napolitano

 

Data Protection and Cybersecurity

Protection of Minors Online: the Guidelines by the EU Commission

In July 2025, the European Commission adopted the Guidelines on the Protection of Minors Online (the Guidelines). The Guidelines, issued pursuant to Article 28, paragraph 4, of Regulation (EU) 2022/2065 (Digital Services Act or DSA), set out the measures that providers of online platforms accessible to minors should implement to ensure a high level of privacy, safety and security for minors online. While not legally binding, the Guidelines represent a key reference point for providers, as they outline best practices aimed at supporting compliance with the obligations established by the DSA. As such, they constitute an important benchmark for all online platforms whose services or content may be accessed by minors.

Scope of Application

The Guidelines apply to all online platforms (e.g. social media networks, content-sharing services, and video-streaming platforms) which are accessible to minors.

According to the European Commission, a platform is considered accessible to minors not only when it is explicitly directed at them, but also whenever the provider is otherwise aware that some users of the service are minors. In particular, the Commission clarifies that a platform cannot be deemed inaccessible to minors solely by virtue of statements contained in its terms and conditions. In the absence of effective and verifiable age assurance mechanisms, the provider is presumed to have accepted the possibility that minors may access its services. For instance, the Commission considers that an online platform disseminating adult content will be regarded as accessible to minors if it has not put in place effective safeguards to prevent minors from accessing its services.

Key measures

  1. Risk Review

First of all, according to the European Commission, providers of online platforms should conduct a comprehensive risk review to determine how to implement the measures set forth by Article 28 of the Digital Services Act. This review - to be carried out periodically - should aim to identify the potential risks arising from minors’ use of the platform and guide the selection of the most appropriate mitigation measures.

To this end, the Commission encourages providers to refer to existing standards and tools for carrying out child rights impact assessments, such as UNICEF’s templates or those developed by the European standardization body CEN-CENELEC.

  2. Age Verification Measures

Providers must implement age assurance mechanisms that are reliable, proportionate to the level of risk, and preserve user privacy. The method selected must be based on a publicly accessible assessment of the risks associated with the service and its content.

In particular, the European Commission recommends the following approaches:

  • Age Verification should be used for access to high-risk features or content, such as pornography, gambling, live chat, image or video sharing, and anonymous messaging. Platforms should adopt trusted and verifiable systems, ideally based on government-issued IDs or digital identity wallets, that allow age confirmation without disclosing personal data to the platform itself. The Commission specifically suggests the adoption of double-blind verification methods, whereby the platform does not gain access to any user-identifying information and the verification provider cannot link the user to the specific service being accessed (a minimal sketch of this flow follows this list). To further enhance both privacy and reliability, the European Commission is developing the so-called EU Digital Identity Wallet, which will allow users in participating Member States to prove their age without revealing their identity or other personal information.
  • Age Estimation may be appropriate for access to lower-risk content, where the risk level does not justify full verification but still requires some access control. This method should only be used where the risk review justifies its adequacy and effectiveness.
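
As a minimal sketch of the double-blind pattern described in the first bullet: the verification provider attests “over 18” against a one-time nonce without learning which platform asked, and the platform checks the signature without learning the user's identity. The flow is a simplified assumption, built on Ed25519 signatures from the third-party Python cryptography package; no real scheme is this simple.

```python
# Simplified, hypothetical sketch of a double-blind age check.
# Assumes the third-party `cryptography` package (pip install cryptography).
import os
from typing import Optional
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Age verification provider: knows the user's identity, not the platform.
provider_key = Ed25519PrivateKey.generate()
provider_public_key = provider_key.public_key()  # published to platforms

def provider_attest(nonce: bytes, user_is_adult: bool) -> Optional[bytes]:
    # Identity-document checks happen here, out of band (stubbed).
    return provider_key.sign(nonce + b"|age>=18") if user_is_adult else None

# --- Platform: sees only the nonce and the signed token, never the identity.
def platform_check(nonce: bytes, token: Optional[bytes]) -> bool:
    if token is None:
        return False
    try:
        provider_public_key.verify(token, nonce + b"|age>=18")
        return True
    except InvalidSignature:
        return False

# The user carries the platform's nonce to the provider and the token back.
nonce = os.urandom(16)
token = provider_attest(nonce, user_is_adult=True)
print(platform_check(nonce, token))  # True: age confirmed, identity undisclosed
```
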
  3. Service and Interface Design

The design of online services should reflect a high standard of child protection. In particular:

  • Default to privacy-friendly settings: Platforms should automatically configure minor accounts to the highest levels of privacy, safety, and security (e.g. limiting the visibility of minors’ profiles and content, restricting interactions with users not explicitly approved, and disabling high-risk features by default, such as geolocation tracking and behavioural profiling); a configuration sketch along these lines follows this list.
  • Ease of reversion: Platforms must allow minors to easily revert any changes made to default privacy settings and ensure that clear warnings are presented when users attempt to reduce these protections.
  • Interface design choices: Interfaces must be structured in a way that allows minors to easily decide how to engage with the service. For instance, where AI features (such as chatbots, content filters, or generative tools) are integrated into platforms accessible to minors, these features must not be enabled by default. Minors should not be nudged or encouraged to use them.
  • Recommendation systems: Providers must pay particular attention to how recommendation algorithms work. According to the European Commission:
    - These systems should not rely on behavioural data collected outside the platform.
    - Platforms must implement safeguards to prevent repeated exposure to harmful or sensitive content.
  • Prevention of nudging: Platforms must avoid nudging minors into disclosing personal information or accepting settings that reduce their level of protection.
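
By way of illustration, privacy-by-default provisioning along the lines of the first bullet might look like the sketch below; every setting name and value is a hypothetical assumption, not taken from the Guidelines or any platform.

```python
# Hypothetical privacy-by-default provisioning for a minor's account.
MINOR_DEFAULTS = {
    "profile_visibility": "private",           # hidden from non-contacts
    "interactions": "approved_contacts_only",  # no unsolicited contact
    "geolocation_tracking": False,             # high-risk feature off
    "behavioural_profiling": False,            # high-risk feature off
    "ai_features_enabled": False,              # chatbots etc. off by default
}

def lower_protection(settings: dict, key: str, value, warning_confirmed: bool) -> None:
    """Weakening a default requires a clear, acknowledged warning first."""
    if not warning_confirmed:
        raise PermissionError("Show and confirm a clear warning before lowering protections")
    settings[key] = value

account = dict(MINOR_DEFAULTS)  # highest protection applied automatically
lower_protection(account, "profile_visibility", "public", warning_confirmed=True)
print(account["profile_visibility"])  # "public", changed only after the warning
```
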
  4. Transparency and Moderation

Minors must be shielded from exposure to harmful content and manipulative practices. In this regard:

  • Harmful content: Providers should prevent exposure to hidden or disguised advertising, subliminal commercial techniques, and manipulative design strategies (such as dark patterns) that could lead to excessive spending, compulsive use of the service, or addictive behaviour.
  • Content moderation: Content moderation policies must explicitly cover threats to minors’ privacy, safety, and security. These should include, among others:
    - Human oversight in addition to automated moderation systems.
    - Adequate training and resources for moderation teams.
    - Continuous functionality and availability of moderation tools.
  • Use of AI Systems: Platforms must also adopt technical safeguards within AI systems to prevent the generation or dissemination of content harmful to minors. This includes integrating filters that detect and block prompts which the provider has identified in its moderation policies as risky for children’s privacy or safety (see the illustrative sketch below).
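
A minimal sketch of such a pre-generation filter is shown below, assuming a simple pattern blocklist; the patterns and messages are hypothetical, and production systems would pair this with classifier-based moderation and human review.

```python
# Hypothetical pre-generation prompt filter: prompts matching patterns the
# provider has flagged in its moderation policy are blocked before they
# reach the model. Patterns and messages are illustrative only.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b(home\s+address|phone\s+number)\b", re.IGNORECASE),
    re.compile(r"\bmeet\s+(me|up)\s+(alone|in\s+person)\b", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> tuple:
    """Return (allowed, message) for a user prompt."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, "Blocked by safety policy; escalated to human review."
    return True, "OK"

print(screen_prompt("What's your home address?"))  # (False, 'Blocked by ...')
print(screen_prompt("Help me with homework"))      # (True, 'OK')
```
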
  5. User Support and Reporting Tools

To effectively protect minors, platforms should offer accessible and comprehensive user support tools, including:

  • Reporting mechanisms: Providers must implement child-friendly systems that allow minors to report harmful content, contact, or behaviour. All users should also be able to report suspected underage accounts, especially when the platform sets a minimum age requirement.
  • Content feedback: Minors should be able to provide feedback on any content they encounter.
  • User support: Providers should offer clearly visible and easy-to-access tools for minors to seek help when exposed to suspicious, illegal, or inappropriate content or behaviour. This includes standard functionalities like blocking or muting other users.
  • Guardian tools: Providers must offer parental control tools that respect the rights and autonomy of children. These tools may include options for managing default settings, setting screen time limits, reviewing communications with other accounts, setting spending controls, and overseeing usage that may affect a minor’s privacy or safety.
  6. Governance and Accountability

Finally, platforms must integrate children’s rights and safety into their internal structures and governance policies. This includes, among others:

  • Appointing a specific person or team responsible for child safety.
  • Providing ongoing training to relevant staff on child protection and online risks.
  • Regularly documenting and publishing the outcomes of child-specific risk assessments, the measures adopted to mitigate those risks, as well as any updates to age assurance methods or safety tools.

Conclusions

The Guidelines offer a clear and structured framework for enhancing the privacy, safety, and security of minors online. While not legally binding, these measures reflect best practices that platforms are expected to adopt to demonstrate compliance with Article 28 of the Digital Services Act.

As such, online platforms accessible to minors are encouraged to use the Guidelines as a benchmark to critically assess their current level of compliance and to identify any necessary improvements or corrective actions to ensure that their protective measures are both adequate and effective.

Author: Federico Toscani

 

Intellectual Property

Influencers under scrutiny: AGCOM approves the Guidelines and Code of Conduct for a more transparent and accountable digital environment

By means of a public statement issued on 24 July 2025, the Board of the Italian Communications Regulatory Authority (AGCOM) announced the final approval of the Guidelines and Code of Conduct for Influencers.

This initiative is the outcome of a comprehensive regulatory process, initiated with a dedicated technical roundtable and further developed through a public consultation, as set forth by Resolution No. 472/24/CONS. The primary objective is to regulate a rapidly growing sector by ensuring that influencers, like traditional media operators, comply with the obligations laid down in the Consolidated Act on Audiovisual Media Services. Influencers are no longer considered merely content creators; those who produce and disseminate audiovisual material on social media platforms bear editorial responsibility for the content they publish.

The Code of Conduct, developed by AGCOM in collaboration with industry stakeholders, professionals, and influencer marketing intermediaries, aims to enhance transparency and recognizability of communications, while promoting responsible and ethical conduct toward the public.

The approved text sets forth clear criteria and behavioural standards to be observed by influencers, including fairness and impartiality in the dissemination of information, respect for human dignity, the prevention of hate speech, the protection of minors, and the safeguarding of copyright. Particular emphasis is placed on the transparency of sponsored content and commercial communications, which must be clearly identifiable, in line with the principles set forth by the “Digital Chart” regulation issued by the Italian Advertising Self-Regulatory Institute (IAP).

The new regulatory framework applies to so-called “relevant influencers”, defined as individuals who either have a minimum of 500,000 followers or reach an average of one million monthly views on at least one social media or video-sharing platform (a minimal sketch of this threshold test follows below). These individuals will be included in a public register to be published on AGCOM’s official website. Such influencers will be subject to specific obligations relating to the transparency of advertising relationships, protection of vulnerable audiences, compliance with legal standards, and source verification. Oversight will be carried out through continuous monitoring and enforcement mechanisms. The applicable sanctions regime provides for fines of up to EUR250,000 for non-compliance, which may increase to EUR600,000 in cases of serious violations, particularly where the protection of minors is compromised.
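
The threshold test reduces to an either/or check, sketched below under assumed data structures; this illustrates the criteria as reported above, not any AGCOM-specified procedure.

```python
# Illustrative check of AGCOM's "relevant influencer" criteria as reported
# above: at least 500,000 followers OR an average of one million monthly
# views on at least one platform. Data shapes are hypothetical.
FOLLOWER_THRESHOLD = 500_000
MONTHLY_VIEWS_THRESHOLD = 1_000_000

def is_relevant_influencer(followers_by_platform: dict,
                           avg_monthly_views_by_platform: dict) -> bool:
    return (
        any(f >= FOLLOWER_THRESHOLD for f in followers_by_platform.values())
        or any(v >= MONTHLY_VIEWS_THRESHOLD
               for v in avg_monthly_views_by_platform.values())
    )

print(is_relevant_influencer({"instagram": 620_000}, {"youtube": 300_000}))  # True
print(is_relevant_influencer({"tiktok": 80_000}, {"youtube": 200_000}))      # False
```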

This regulatory development represents a concrete commitment toward fostering a safer, more transparent, and reliable digital ecosystem - for both citizens and the market. In a world where digital influence plays an increasingly impactful role, the key principle is accountability.

Author: Rebecca Rossi

 

Technology Media and Telecommunication

Consultation on the first review of the Digital Markets Act

On 3 July 2025, the European Commission launched a public consultation on the first review of the Digital Markets Act (DMA), which is scheduled to take place by 3 May 2026.

The purpose of this consultation is to collect feedback and data from stakeholders in order to assess the effectiveness of the DMA in achieving its objectives of ensuring contestable and fair digital markets. In particular, the Commission seeks input on the DMA’s capacity to address emerging challenges, including those arising from the introduction of AI-powered services. The Commission will use the comments received during the public consultation to prepare a report evaluating the impact and effectiveness of the DMA and to determine whether amendments and/or additions to the regulatory framework established by the DMA are necessary.

The resulting report from the public consultation will be submitted to the European Parliament, the Council, and the European Economic and Social Committee.

This public consultation builds on previous Commission initiatives: in 2023 and 2024, the Commission launched consultations to verify the achievement of the DMA’s objectives.

The consultation primarily targets business users (notably SMEs), end-users of digital services falling within the scope of the DMA, and associations representing such users.

Within the framework of this public consultation, the Commission intends to focus on four main aspects:

  1. Whether the objectives of the DMA have been met;
  2. The impact of the DMA on business users, particularly SMEs, and end-users;
  3. Whether the interoperability obligation - i.e., the obligation under Article 7 of the DMA requiring gatekeepers providing number-independent interpersonal communication services to make the basic functionalities of their services interoperable with those of another provider offering or intending to offer such services within the Union - should be extended to include online social networking services;
  4. Whether amendments to the DMA provisions, particularly those relating to the obligations imposed on gatekeepers, are necessary.

Below are some of the questions included in the public consultation, to which participants are invited to respond:

  • in relation to core platform services, whether participants have "any comments or observations on the current list of core platform services";
  • whether participants have "any comments or observations on the designation process [of gatekeepers] (e.g. quantitative and qualitative designations, and rebuttals) as outlined in the DMA, including on the applicable thresholds";
  • whether participants have "any comments or observations on the current list of obligations (notably Articles 5 to 7, 11, 14 and 15 DMA) that gatekeepers have to respect";
  • whether participants have "any comments or observations on the tools available to the Commission for enforcing the DMA" for example, whether they are suitable and effective;
  • whether participants have "any comments or observations on the DMA’s procedural framework (for instance, protection of confidential information, procedure for access to file)".

Interested parties wishing to participate in the public consultation must submit their contributions by 24 September 2025.

On a related matter, the article titled “ICA launches public consultation on the draft regulation on the forms of cooperation and coordination provided for implementing the Digital Markets Act” may be of interest.

Authors: Massimo D’Andrea, Flaminia Perna, Arianna Porretti

 


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Giulio Napolitano, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani and Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as Diritto Intelligente, a monthly magazine dedicated to AI, here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
