DLA Piper Algorithm to Advantage

18 February 2026

“I, Robot” and “I, Consumer”: Antitrust and unfair commercial practices risks in consumer robots

Isaac Asimov’s series of short stories under the title I, Robot introduced the world to his Three Laws of Robotics and the character Dr Susan Calvin, a robopsychologist called on when a robot’s behaviour became questionable or harmful.

Consumer robots are now moving from novelty to mainstream – ranging from robots performing simple household tasks to human-sized, voice-integrated companion devices – and are increasingly embedded in wider digital ecosystems. As these devices integrate with voice assistants and other access points controlled by large platforms, their legal risk profile expands, including under EU competition law and EU unfair commercial practices rules.

Below, we look at key risk vectors for consumer robots under both the prohibition of abuse of dominance (Article 102 TFEU) and the EU unfair commercial practices rules (Directive 2005/29/EC, UCPD), with brief references to the Digital Markets Act (DMA) and the Digital Services Act (DSA).

Abuse of dominance and DMA: self-preferencing, tying, and interoperability risks

The EU Courts have confirmed, in the Google Shopping judgment, that self-preferencing by a dominant undertaking may constitute an abuse where, in its market context, it is capable of producing exclusionary effects. In consumer IoT markets, the European Commission has also identified competition concerns around voice assistants and related ecosystem control. Separately, the DMA imposes ex ante obligations on designated gatekeepers in relation to their “core platform services,” including restrictions on self-preferencing and certain forms of tying, and obligations designed to facilitate interoperability.

Accordingly, the key antitrust-related compliance questions are:

  • Does the undertaking manufacturing the consumer robot have a dominant position on a properly defined relevant market?
  • Has the undertaking been designated as a gatekeeper for any relevant core platform service?
  • Where the robot provides recommendations or rankings (for example, when asked which vacuum cleaner works best or to suggest a new car for the family): are these influenced by ownership links or other incentives that may distort neutrality?

If the answer to any of the above questions is yes, additional safeguards are typically required to mitigate allegations of tying/bundling, discriminatory ranking, or exclusionary leveraging.

Tying/bundling: where the robot is commercially linked to a gatekeeper’s core platform service, the DMA may restrict making access to the robot (or essential functionality) conditional on the use of the gatekeeper’s payment, identification, marketplace, or other core services. One risk scenario is pre-installation of key software combined with the impossibility of uninstalling or replacing it. Another scenario is offering the robot at a commercially attractive price only if the user activates the gatekeeper’s payment or identification services, steering users to the gatekeeper’s own ecosystem.

Self-preferencing: where the operator is dominant (or subject to DMA obligations), it should assess the design of ranking, defaults and recommendation logic to ensure it does not treat its own products or services more favourably than those of rivals when operating the relevant interface. Examples could include prioritising the robot manufacturer’s own marketplace listings in response to the consumer’s product availability searches, or systematically elevating the robot manufacturer’s own proprietary services within the robot’s interface while degrading access to competing services. A robot manufacturer could also use behavioural, environmental or usage data collected by the robot to favour its own downstream services – for example, its own marketplace or advertising inventory – while denying equivalent data access to competitors.
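
By way of illustration only, the sketch below shows one way a ranking pipeline can keep ownership signals out of the scoring function and regression-test that neutrality. It is a minimal sketch in Python; all names (Listing, rank_listings, audit_neutrality) are hypothetical assumptions, not any vendor’s implementation.

    # Hypothetical sketch: keeping ownership signals out of ranking.
    from dataclasses import dataclass

    @dataclass
    class Listing:
        name: str
        relevance: float           # query-match quality, 0..1
        rating: float              # average user rating, 0..5
        first_party: bool = False  # ownership link: must NOT affect the score

    def rank_listings(listings: list[Listing]) -> list[Listing]:
        # The sort key deliberately omits first_party, so the operator's
        # own listings receive no ranking advantage.
        return sorted(listings, key=lambda l: (l.relevance, l.rating), reverse=True)

    def audit_neutrality(listings: list[Listing]) -> bool:
        # Regression test: flipping every ownership flag must leave the
        # ordering unchanged; a failure means ownership leaked into the key.
        baseline = [l.name for l in rank_listings(listings)]
        flipped = [Listing(l.name, l.relevance, l.rating, not l.first_party)
                   for l in listings]
        return baseline == [l.name for l in rank_listings(flipped)]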

Interoperability and data access: interoperability and access to APIs or data can be particularly sensitive in robot ecosystems, especially where a single undertaking supplies the robot hardware, the operating system, the voice assistant, and adjacent connected devices. The Commission’s Consumer IoT inquiry highlighted concerns about proprietary standards, restricted APIs, and data concentration that may enable leveraging across markets. For example, a robot manufacturer may wish to introduce a closed walled-garden design, providing both hardware and software for its robots as well as various connected appliances, such as vacuum cleaners, fridges, household security or lighting. Such designs should be pre-assessed against DMA interoperability/access obligations (if applicable) and against abuse of dominance risks where a refusal or limitation of interoperability may foreclose rival products or services.
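
Purely as an illustration of the equivalent-access principle, the following hypothetical Python gate resolves API permissions from client registration and data scope alone, so first-party services pass through exactly the same check as third parties. All names and scopes are assumptions for the sketch.

    # Hypothetical sketch: one access policy per data category, applied
    # identically to first-party and third-party clients.
    ALLOWED_SCOPES = {
        "device.state": {"read"},
        "home.map": {"read"},
        "usage.events": {"read"},
    }

    def authorize(client_id: str, scope: str, action: str,
                  registered_clients: set[str]) -> bool:
        # client_id is deliberately not compared against a first-party
        # allow-list: the operator's own services and competitors' services
        # are authorised by registration and scope alone.
        if client_id not in registered_clients:
            return False
        return action in ALLOWED_SCOPES.get(scope, set())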

Unfair commercial practices: transparency, disclosure and manipulation risks

Under the UCPD, misleading actions and misleading omissions are both prohibited. A misleading action arises where a trader provides false or otherwise deceptive information; a misleading omission arises where material information is withheld, so that the average consumer cannot take an informed transactional decision.

In this context, the UCPD also prohibits so-called dark patterns: deceptive or manipulative interface design practices that steer, pressure, or trick consumers into decisions that are not in their genuine interest.

Transparency: For consumer robots, “material information” may include:

  • relevant safety limitations, including foreseeable misuse and bystander/child risks
  • cybersecurity and connectivity constraints
  • any additional payments required for functionality that consumers would reasonably expect to be included in the advertised price

Further, where “full” or “normal” functionality depends on a subscription – such as a fee for robot operating software – the subscription cost should be disclosed clearly and prominently in advertising and at the point of sale. See the EU Court’s Canal Digital Danmark judgment, in which the omission of mandatory costs for a service was held to be a misleading omission.
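
As a minimal sketch with hypothetical figures and names, the mandatory subscription cost could be composed into a single disclosure string used in advertising and at the point of sale:

    # Hypothetical sketch: surfacing mandatory subscription costs alongside
    # the advertised hardware price, rather than omitting them.
    def price_disclosure(hardware_eur: float, monthly_fee_eur: float,
                         min_term_months: int) -> str:
        mandatory_total = hardware_eur + monthly_fee_eur * min_term_months
        return (f"EUR {hardware_eur:,.2f} plus EUR {monthly_fee_eur:,.2f}/month "
                f"operating-software subscription (minimum {min_term_months} "
                f"months; EUR {mandatory_total:,.2f} total mandatory cost)")

    print(price_disclosure(799.0, 9.99, 12))  # illustrative figures only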

Disclosure: A further risk involves undisclosed conflicts of interest in product recommendations. If the robot or app recommends products or services because the provider receives remuneration – for example, affiliate fees or paid prominence – failure to disclose that influence clearly and in a timely manner may infringe the UCPD. Indeed, the UCPD’s blacklist specifically covers providing a ranking in response to a consumer’s search query without disclosing that the ranking was paid for. If the consumer robot operates via voice-based interactions, disclosures should be audible, proximate to the recommendation, and phrased in plain language (e.g., “These are sponsored results”), with non-sponsored alternatives readily accessible.
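
On a voice surface, that guidance could translate into something like the hypothetical sketch below, which speaks the disclosure before any paid recommendation and keeps non-sponsored alternatives in the answer; all names are illustrative assumptions.

    # Hypothetical sketch: audible sponsorship disclosure spoken before any
    # paid recommendation, with organic alternatives kept in the answer.
    from dataclasses import dataclass

    @dataclass
    class Result:
        name: str
        sponsored: bool

    def voice_answer(results: list[Result]) -> list[str]:
        sponsored = [r.name for r in results if r.sponsored]
        organic = [r.name for r in results if not r.sponsored]
        lines = []
        if sponsored:
            lines.append("These are sponsored results:")  # disclosure first
            lines += sponsored
        if organic:
            lines.append("Other options, not sponsored:")
            lines += organic
        return lines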

Manipulation: Finally, the UCPD prohibits manipulative practices, including the use of “dark patterns”. Article 25 of the DSA contains a parallel prohibition for online platforms. Such manipulative “dark patterns” have been a clear focus of enforcers in Europe – for example, through sweeps by several national authorities targeting fake urgency and steering. See the European Commission and CPC sweep in January 2023, the German Bamberg Higher Regional Court’s decision in the events booking case, and the Hungarian Competition Authority’s decision in the hotel booking sector.

Although voice assistants have been available for some time, robot interfaces introduce a new mode of communication for consumers, particularly given their physical presence and – with humanoid robots – the possibility of a closer connection between machine and person. The threats and risks of such connections, and of anthropomorphising robots, are a common theme of science fiction. This closer connection entails a greater risk of emotional attachment and undue influence, as users may ascribe human-like intentions or trustworthiness to systems that are in fact designed and optimised for commercial objectives.

As a result, the assessment of commercial communication by consumer robots under the UCPD must take into account the broader implications of behavioural manipulation and consumer vulnerability.

Recommendations

Companies that design, produce and deploy consumer robots clearly do not need to employ a robopsychologist like Susan Calvin to avoid causing (economic) consumer harm by their robots. But they do need compliance-by-design processes that embed EU competition and consumer protection considerations early in product development and deployment.

Practical steps include:

  • Market and regime mapping: identify the relevant markets and any potential dominance or DMA designation issues for interfaces, operating systems, voice assistants, marketplaces, payment/ID services and data assets.
  • Design governance: audit and pre-assess ranking, recommendation logic, defaults and access conditions to prevent discriminatory outcomes and to mitigate tying/bundling risk (see the audit sketch after this list).
  • Interoperability assessment: document the rationale for any closed ecosystem choices; assess whether interoperability/API/data access limitations could create foreclosure risk, and whether DMA interoperability duties apply.
  • Consumer transparency: communicate any safety limitations, cybersecurity/connectivity constraints and any paid features necessary for performance clearly and perceptibly to consumers.
  • Sponsored content controls: label paid prominence at the point of recommendation (voice and visual surfaces) and ensure that non-sponsored alternatives are easy to find.
  • Dispute readiness: prepare for challenges by consumer organisations, nascent competitors and third parties seeking access.
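
To make the design-governance audit concrete, the following hypothetical sketch checks logged rankings for systematic skew towards first-party listings. The log format, names and tolerance are illustrative assumptions, not a prescribed methodology, and a flagged skew is a prompt for internal review rather than proof of abuse.

    # Hypothetical sketch: periodic audit of logged rankings for skew
    # towards first-party listings.
    from statistics import mean

    # log: one ranked list per query of (listing_name, first_party) pairs;
    # assumes both first-party and third-party listings appear in the log.
    def average_position(log: list[list[tuple[str, bool]]]) -> dict[str, float]:
        own, rivals = [], []
        for ranking in log:
            for position, (_, first_party) in enumerate(ranking, start=1):
                (own if first_party else rivals).append(position)
        return {"first_party": mean(own), "third_party": mean(rivals)}

    def flag_skew(log, tolerance: float = 0.5) -> bool:
        pos = average_position(log)
        # A materially better (lower) average slot for own listings
        # warrants internal review.
        return pos["third_party"] - pos["first_party"] > tolerance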