DLA Piper Algorithm To Advantage

16 February 2026

Can GenAI providers bear antitrust liability under EU law?

OpenAI's launch of ChatGPT in November 2022 transformed the way humankind views machine-based intelligence. Competition is central to this brave new world, with older and newer players striving for the attention of consumers on various market levels.

As such, compliance with antitrust laws is essential, including by providers of GenAI systems – in simple terms, those who develop and put to market such systems, and as defined in more technical terms by Article 3(3) of the EU AI Act.

The European Commission and national competition authorities are focusing on AI providers' compliance with competition law. See, for example, the Commission's December 2025 antitrust investigation into possible anticompetitive conduct by Google in the use of online content for AI purposes. Or the Commission's earlier Competition Policy Brief on Generative AI.

Consequences for infringements can be serious, ranging from large fines imposed by competition authorities to private damages claims by injured parties.

Breach of antitrust laws can take multiple forms

GenAI models could act as vehicles for anti-competitive collusion among commercial users active in the same market – for example, through the exchange of commercially sensitive information.

In the more general AI context, algorithmic collusion has been a hot topic, with several recent cases in the US involving the apartment rental and property management market.

Models could also be used to create anti-competitive content or behaviour. For example, to draft a cartel agreement, or to find a more effective – or secretive – way to implement an abuse of a dominant position.

Or models could be used as a tool to mask anti-competitive behaviour – for example, by providing advice on how to evade antitrust investigators in a jurisdiction.

In these examples of anti-competitive conduct, one of the most important aspects is liability. In cases where GenAI systems autonomously – that is, without human control – develop strategies that may be considered collusive, it becomes challenging to determine the locus of intent and control.

"The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom," wrote Isaac Asimov.

We will see whether the various antitrust and regulatory rules – and their enforcers – have already gathered sufficient wisdom to tackle the new issues posed by computer science and GenAI tools.

To take one more example, a key question is whether a GenAI provider has a positive obligation to implement guardrails and therefore filter anti-competitive conduct – similar to filtering hate speech or other inappropriate content.

Can GenAI providers be liable under EU competition law?

From a general EU antitrust law perspective, liability for anti-competitive agreements can extend beyond direct participants to entities that knowingly contribute to or enable such conduct.

The EU courts and the European Commission have previously recognised accessory liability for facilitators of cartels, even where they are not active in the relevant market.

If a GenAI system is deployed to coordinate pricing, allocate markets, or exchange sensitive information between competitors, and the provider is aware or should reasonably be aware of such use, the provider could be deemed to have participated in or facilitated an infringement.

Also, due to the public policy nature of EU competition law, such personal and objective liability cannot, in general, be effectively limited or excluded via contractual arrangements – for example, between the provider and a user of the GenAI tool.

The issue of "awareness" is difficult to establish in the case of the provider of a GenAI tool, especially given its black box, opaque nature – the difficulty in discerning why machine learning models produce the outputs they do.

Even so, we can set out some factors that would point to awareness:

  • Where the provider markets the tool as being capable of anti-competitive conduct. For example, advertising the tool as useful for traders to achieve a significant price increase in a market.
  • Where the provider concludes license agreements with users, from which an anti-competitive purpose or intent is apparent. For example, where a GenAI tool is licensed to several direct competitors with the aim of allowing the secret coordination of their market behaviour.
  • Where the provider receives internal or external complaints regarding anti-competitive conduct and fails to act on such complaints.
  • Where internal documentation shows that the provider was aware of the risk of such an improper use but – for whatever reason – failed to take appropriate steps.

What are the competition law implications in the EU AI Act?

Competition law obligations are also apparent from an EU regulatory perspective, though so far only to a limited extent.

The EU AI Act sets out various obligations applicable for "general purpose AI models," and additional requirements for models classified as posing a "systemic risk."

The following are certain focus points that can already be identified for providers of so-called general purpose AI models.

These providers are obliged to draw up technical documentation of the model and keep it up to date. This documentation must include the “acceptable use policies applicable” – though this obligation does not apply to open-source models unless the model entails a systemic risk.

It is likely such a policy may include a proviso that contributing to, facilitating or participating in anti-competitive conduct is not included in the scope of “acceptable use” by the GenAI model.

At the very least, this should entail, for example, that the model could not be applied towards anti-competitive uses – for example, as noted above, to draft a cartel agreement among competitors, or to gather advice on how to evade an inspection by a competition authority.

One could argue that noncompliance with EU competition law is also a systemic risk under the EU AI Act, in which Article 3(65) defines it as:

“A risk that is specific to the high-impact capabilities of general purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”

Indeed, breaches of EU competition law could have a significant effect on the Union market and negative effects on EU society as a whole. This could be from actual or potential harm to the collective interests of consumers, competitors, or the market structure itself.

The EU Commission's Code of Practice for General-Purpose AI Models includes, in its Safety and Security chapter, a documentation prompt on model propensities that expressly mentions the possibility of colluding with other AI models or systems.

This shows that possible anti-competitive arrangements among AI models have already been taken into account as a relevant feature of the model.

In light of this, it could also be argued that large GenAI companies – for example, providers of general-purpose GenAI with systemic risk – should include competition law risk assessment as part of their relevant regulatory risk assessment obligations.

If this is correct, then GenAI developers should:

  • assess and mitigate any possible breaches of competition law as a relevant risk; and
  • document and report incidents involving serious competition law breaches to the European AI Office and, potentially, to the national competent competition authorities – or rely on codes of practice to this effect.

An interesting question here is whether a GenAI provider could, in the event of a serious breach, be obliged to notify the European Commission or any national competition authority.

Points GenAI providers should consider

The competition law liability landscape for GenAI providers is complex and will require considerable wisdom from both market participants and regulators.

GenAI providers should:

  • be aware of their potential exposure to EU competition law and take this into account when developing and releasing systems.
  • consider specifically excluding anti-competitive conduct from the scope of acceptable use policies.
  • consider engaging in a targeted risk assessment exercise to test whether the GenAI model could be involved in anti-competitive conduct – for example, in drafting anti-competitive agreements or serving as a vehicle for collusion – and to devise appropriate guardrails.
DLA Piper Algorithm To Advantage

Algorithm To Advantage

Visit our hub for our latest thinking across the full AI lifecycle. We identify the forces shaping AI, analyse the emerging risks and opportunities, and break down solutions.
Find out more
Print