
15 October 2025
European Commission seeks feedback on draft guidance on serious AI incidents
The European Commission (Commission) has published draft guidance and a reporting template addressing serious incidents under the AI Act, accompanied by a consultation to gather stakeholder feedback before finalization.
The guidance, published on September 26, 2025, is the latest set of advisory materials from the Commission, which is tasked with clarifying practical steps to comply with the provisions of Regulation (EU) 2024/1689 (AI Act). Further information on previous guidance can be found in DLA Piper’s summaries of the Commission’s work on general-purpose artificial intelligence (AI) model obligations and prohibited AI practices.
This alert discusses the recent guidance from the Commission and provides an overview of key terms and definitions for stakeholders.
What is a “serious incident”?
The AI Act defines a serious incident as an:
“incident or malfunctioning of an AI system that directly or indirectly leads to […] the death of a person, or serious harm to a person’s health; a serious and irreversible disruption of the management or operation of critical infrastructure; the infringement of obligations under Union law intended to protect fundamental rights; [or] serious harm to property or the environment” (Article 3(49)).
As is outlined below, many of the terms used in this definition are either undefined or only vaguely described in the text of the AI Act, and they have drawn broad industry criticism for the uncertainty they create for organizations seeking to build responsive compliance and reporting frameworks.
What is the guidance?
The guidance sets out the Commission’s proposed approach to three goals in relation to the serious incident reporting obligations of Article 73:
- Clarifying key undefined or vague terms
- Distinguishing the different reporting requirements across operator roles, and
- Outlining how the requirements interact with other EU incident reporting obligations.
The guidance also provides a template for reporting incidents, which can be used to ensure that the correct minimum information is recorded and provided to market surveillance authorities.
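For organizations preparing internal processes ahead of the template’s finalization, the sketch below illustrates, purely by way of example, the kind of information an internal incident log might capture before a formal report is prepared; the field names are illustrative assumptions and do not reproduce the Commission’s template.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative internal record only; the field names are assumptions and do not
# reproduce the Commission's reporting template.
@dataclass
class InternalIncidentRecord:
    ai_system_name: str        # identification of the AI system concerned
    provider_name: str         # provider responsible for the system
    member_state: str          # EU Member State where the incident occurred
    detected_at: datetime      # when the incident was identified
    description: str           # what happened and who or what was affected
    harm_category: str         # eg "health", "critical infrastructure",
                               # "fundamental rights", "property/environment"
    causal_link: str           # "direct" or "indirect" link to the AI system
    mitigations: list[str] = field(default_factory=list)  # corrective actions taken so far
```

Capturing this information at the point of detection can make it easier to complete the formal report once the Commission’s template is finalized.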
Distinct reporting obligations
A key component of the guidance is its delineation of the (often interconnected) serious incident reporting obligations that apply to different operators under the AI Act.
Article 73 focuses predominantly on the role of providers in reporting incidents. Broadly, providers are organizations that develop an AI system, have one developed on their behalf, or substantially modify one. These organizations are subject to the most comprehensive reporting obligations under Article 73, including requirements to proactively report serious incidents to the market surveillance authorities of the EU Member State in which the incident occurred.
Article 73 highlights that deployers (ie, those who use an AI system under their control) also have an active role in identifying and mitigating serious incidents, including prompt reporting to the provider of an AI system when an incident is identified. Where the provider cannot be reached, deployers are also expected to cooperate with the market surveillance authority to ensure that incidents are quickly managed and wider impacts are mitigated.
The guidance briefly outlines the responsibilities of several other stakeholders, including the roles of market surveillance authorities and the Commission itself, as part of a broader effort to establish collective monitoring of AI systems.
In doing so, the guidance underscores the connected nature of reporting obligations under the AI Act, with operators and other stakeholders each playing their part in market surveillance and ongoing monitoring.
Clarification of key terms
The guidance clarifies many uncertain terms used in the reporting obligations. Key examples are outlined below.
Incident and malfunction
Serious incident reporting is triggered when an “incident” or “malfunction” occurs, yet neither term is defined in the AI Act. While the guidance does not formally define either term, it provides insight into what may constitute an incident or malfunction.
For example, “incidents” are generally “unplanned/programmed deviation[s] in the characteristics of performance” of an AI system, particularly those with actual or potential negative consequences (eg, harm to humans or critical systems).
The use of both “incident” and “malfunction” in the definition of “serious incident” implies that the two terms are distinct. The guidance clarifies that malfunctions, comparatively, are instances where the AI system does not perform as intended or fails to meet the performance levels outlined by the provider in its technical documentation.
The guidance notes, however, that the two terms should not be strictly distinguished in interpretation; rather, the pairing serves as an “emphasis on the importance of malfunction in the context of incident monitoring.”
Examples of incidents or malfunctions include misclassifications, significant drops in accuracy, temporary system downtime, and unexpected system behavior.
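By way of a purely illustrative sketch, a deployer’s internal monitoring might surface candidate incidents of this kind by comparing observed behavior against the performance levels stated in the provider’s technical documentation; the metric names and thresholds below are hypothetical assumptions rather than values drawn from the guidance.

```python
# Illustrative sketch only: flags candidate incidents where observed behavior
# departs from documented performance. All thresholds are hypothetical.

DOCUMENTED_ACCURACY = 0.95        # stated in the provider's technical documentation (assumed)
SIGNIFICANT_ACCURACY_DROP = 0.05  # drop treated as significant (assumed)
MAX_DOWNTIME_MINUTES = 30         # tolerated outage window (assumed)

def candidate_incident_reasons(observed_accuracy: float, downtime_minutes: float) -> list[str]:
    """Return the reasons, if any, why an observation may warrant incident review."""
    reasons = []
    if DOCUMENTED_ACCURACY - observed_accuracy >= SIGNIFICANT_ACCURACY_DROP:
        reasons.append("significant drop in accuracy against documented performance")
    if downtime_minutes > MAX_DOWNTIME_MINUTES:
        reasons.append("system downtime beyond the tolerated window")
    return reasons

# Example: an observed accuracy of 0.88 and 45 minutes of downtime would both be flagged.
print(candidate_incident_reasons(0.88, 45))
```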
Direct and indirect cause
For a serious incident to arise, the incident or malfunction of the AI system must directly or indirectly lead to the harm in question.
Direct causation is characterized by the “but for” test, which holds that “but for the AI system, the harm would not have occurred.”
Indirect causation, comparatively, is characterized as instances where the harm or impact is a secondary effect. The guidance includes the following examples of indirect causation:
- When an AI system provides an incorrect analysis of medical imaging, leading a physician to make an incorrect diagnosis or treatment decision, which subsequently causes harm to the patient
- When an AI-based credit scoring system incorrectly flags a person as unreliable, and a loan is denied on the basis of that assessment
The guidance notes that causation should be limited to cases where the AI system is used as intended or in a reasonably foreseen manner, and not in situations where the incident results from incorrect or unexpected use.
Serious harm to a person’s health
The guidance does not provide a specific set of parameters for what constitutes serious harm to a person’s health, but provides illustrative examples, including:
- Life-threatening illness or injury
- Temporary or permanent impairment of a body structure or a body function, and
- Conditions necessitating hospitalization or prolongation of existing hospitalization.
The common theme throughout the examples provided is the significant extent of the harm, even when the effect is transient.
Serious harm to property
Broadly, harm to property is deemed serious if the damage or destruction impairs the substance of the property to such an extent that it can no longer be used for its intended purpose.
The guidance provides several considerations when assessing harm to property, including:
- Economic impact, including costs of repair
- Cultural or historical significance of the property, and
- The extent to which the harm affects the livelihood or quality of life of individuals or communities
Serious harm to the environment
As with many of the relevant terms, what constitutes serious harm to the environment is not defined in the AI Act. The guidance looks to other EU legislation for insight, including the Environmental Liability Directive and the Environmental Crime Directive, both of which are intended to limit harm to protected species, habitats, water, and land.
The guidance identifies several common themes across these applicable regulations that should also be considered in the context of the AI Act, including:
- Whether damage is long-lasting
- Whether damage is extensive
- Whether damage is reversible
- Whether environmental resources are contaminated
- Whether natural ecosystems are disrupted
Harm to the collective interests of individuals
While undefined in the AI Act, the concept of “collective interests of individuals” is common throughout EU regulation. The guidance clarifies that, in the context of the AI Act, this should be interpreted as interests (protected by EU law) that are collectively shared by a group of individuals. Examples of these interests could include:
- Environmental protection
- Public health
- Proper functioning of democratic institutions
Serious and irreversible disruption
The AI Act does not provide a threshold for what is considered “serious” in this context. The guidance provides several qualities to consider when assessing disruptions to determine whether they are serious, including:
- Whether the disruption requires physical infrastructure to be rebuilt or destroys specialized equipment
- Whether the disruption contaminates water, soil, or air
- Whether the disruption causes permanent disablement of critical components of infrastructure, such as power substations
Examples of serious incidents outlined in the guidance include:
- Disruptions that result in an imminent threat to life or safety
- Destruction of key infrastructure
- Disruption to economic activities
Broader reporting obligations and critical infrastructure resilience frameworks
The guidance concludes by briefly outlining the interplay with other incident reporting obligations in the EU and by providing several examples of AI systems that require multiple reporting processes to be followed.
Operators involved with high-risk AI systems often have multiple reporting obligations that extend beyond the AI Act. For example, AI systems functioning as medical devices and leveraging personal data and sensitive health information require compliance with horizontal (eg, General Data Protection Regulation) and sector-specific (eg, Medical Device Regulation) regulations, each of which includes its own monitoring, reporting, and market authority cooperation requirements.
The guidance also has implications for those operating infrastructure supporting important services. The concept of “serious and irreversible disruption of the management or operation of critical infrastructure” in Article 3(49) of the AI Act also brings the Act into alignment with the EU’s broader critical infrastructure resilience agenda. In particular, Directive (EU) 2022/2557 on the resilience of critical entities (CER Directive) and Directive (EU) 2022/2555 (NIS 2 Directive) both impose obligations on operators of essential infrastructure to identify, assess, and report incidents that significantly disrupt the provision of essential services. These frameworks share the same policy objective as the AI Act’s incident-reporting regime: ensuring the continuity and safety of critical functions that underpin the EU’s economic and societal stability.
While the AI Act focuses on whether AI might be the cause of a failure, the CER and NIS 2 Directives take a functional perspective, assessing whether essential services have been compromised irrespective of the technological cause. As a result, operators deploying AI within energy grids, transport networks, health systems, or financial-market infrastructure may find themselves subject to multiple concurrent notification duties: under the AI Act (to market surveillance authorities), under the NIS 2 Directive (to competent cybersecurity authorities or computer security incident response teams), and under the CER Directive (to national resilience coordinators). The Commission’s guidance implicitly recognizes this overlap and encourages thorough internal processes for triaging and escalating incidents to avoid duplication and inconsistent reporting.
From a governance standpoint, organizations are encouraged to view AI-related serious incident reporting as part of a single integrated resilience and risk-management framework. The same monitoring, logging, and forensic-analysis tools used to meet NIS 2 and CER requirements may be updated to also support AI Act compliance. Additionally, companies are encouraged to review incident-classification matrices to ensure that definitions of “serious,” “substantial,” and “critical” disruption are applied consistently.
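As a sketch of what a harmonized classification matrix might look like in practice, the example below maps internal severity labels onto the notification routes discussed above; the labels and routing rules are assumptions made for illustration only and are not mappings prescribed by the AI Act, the NIS 2 Directive, or the CER Directive.

```python
# Illustrative classification matrix; severity labels and routing are assumptions,
# not mappings prescribed by the AI Act, NIS 2 Directive, or CER Directive.

SEVERITY_ROUTING = {
    "serious":     ["AI Act - market surveillance authority"],
    "significant": ["NIS 2 - competent authority / CSIRT"],
    "disruptive":  ["CER - competent national authority"],
}

def notification_routes(severity_labels: set[str]) -> set[str]:
    """Collect every notification route whose threshold a single incident may meet."""
    routes: set[str] = set()
    for label in severity_labels:
        routes.update(SEVERITY_ROUTING.get(label, []))
    return routes

# Example: one incident assessed against all applicable internal labels at once,
# so the same event is escalated consistently rather than triaged in silos.
print(sorted(notification_routes({"serious", "significant"})))
```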
This interplay is a reminder that AI systems are subject to a broad range of regulatory regimes, and compliance with the AI Act is only one part of a successful AI and technology compliance framework. Organizations are encouraged to treat the AI Act as one of many influences when developing compliance processes for the contexts in which their AI systems are used.
Next steps
Stakeholders are encouraged to help shape the interpretation of the serious incident reporting obligations by responding to the open consultation on the guidance.
The consultation runs from September 26 to November 7, 2025, after which feedback will be reviewed and the guidance will be finalized.
Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy advisers helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. The firm continuously monitors updates and developments arising in AI and their impact on industry across the world.
For more information on AI and the emerging legal and regulatory standards, please visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through DLA Piper’s AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.


