14 October 2025

EIOPA publishes opinion on AI governance and risk management

On 6 August 2025, EIOPA published an opinion aimed at clarifying how existing insurance regulation should be interpreted in light of the provisions of the AI Act. The document doesn't introduce new rules but provides interpretative guidance that situates AI within the existing regulatory framework (Solvency II, IDD, DORA, GDPR).

Its objective is to provide proportionate governance and risk management criteria capable of mitigating risks and protecting policyholders.

 

Scope of the opinion

AI is expected to play an increasingly pivotal role in the digital transformation affecting all economic sectors, including insurance.

In insurance, the use of AI systems spans the entire value chain: from underwriting and premium setting to claims management and fraud detection. The opportunities are clear: faster, automated settlements, more granular and accurate risk assessments, and sophisticated tools to detect fraudulent activities. But these benefits come with risks, stemming mainly from the opacity of some models, with potential systemic biases and discriminatory effects on policyholders.

With the entry into force of Regulation (EU) 2024/1689, known as the AI Act, the EU introduced a comprehensive, cross-sectoral approach to AI regulation based on a risk-based logic. The regulation classifies AI systems according to their level of risk and, in the insurance sector, designates as high-risk those systems used for risk assessment and pricing in life and health insurance. These cases remain outside the scope of EIOPA’s opinion and are subject to the strict governance and risk management obligations already required under the AI Act. Conversely, for systems that don't fall into these categories, the opinion indicates that insurance sector legislation is complemented by transparency obligations, the promotion of internal training, and the adoption of codes of conduct, fostering responsible and coordinated AI management.

It’s in this context that EIOPA’s opinion is framed, aiming to clarify the interpretation of sectoral legislation as applied to AI systems that were non-existent or not widely used when the legislation was adopted. As noted, the document doesn’t create new obligations but defines proportionate, risk-based supervisory expectations, proposing a systematic approach adapted to the specificities of each use case. Particular emphasis is placed on ensuring European-level consistency, referencing the definition of “AI system” in the AI Act and in the European Commission’s AI Office guidelines, while leaving room for future interpretative clarifications. Even independently of the formal qualification as an “AI system,” insurance legislation already provides for governance and control measures applicable to the use of machine learning-based models.

 

AI governance and risk management

The topic of governance is embedded within an already complex regulatory framework (Solvency II, IDD, DORA), which converges on a key principle: governance and risk management systems must be effective, proportionate and commensurate with the complexity of the operations.

According to EIOPA, the first step is a thorough risk assessment of the adopted systems, as their impact is not uniform. Systems processing sensitive data or affecting decisions critical to clients require stronger controls, while others can be managed with simplified procedures.

The assessment should consider:

  • the volume and sensitivity of the data processed
  • the characteristics of the client base
  • the system’s level of autonomy and its application (internal or consumer-facing)
  • the effects on fundamental rights (non-discrimination and financial inclusion)
  • prudential implications (operational continuity, solvency, reputation)

Based on this analysis, insurance undertakings are expected to adopt proportionate mitigation and management measures, ranging from human oversight to data governance, including tools to address model opacity. EIOPA’s approach is flexible: it doesn’t impose a single model but recommends interventions calibrated to actual risks.

In line with Solvency II, IDD, and DORA requirements, insurance undertakings must ensure responsible AI use by developing risk-based and proportionate governance and risk management systems, considering: fairness and ethics, data governance, documentation and record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. These principles shouldn’t be addressed in isolation but in an integrated and coherent manner, documented at the organisational level, and applied throughout the system’s lifecycle.

The opinion also emphasises the clear definition of roles and responsibilities. Insurance undertakings remain responsible for the systems used, even if developed by third parties. In such cases, suppliers must provide adequate assurances and - where access to information is limited by intellectual property constraints - complementary safeguards such as contractual clauses, SLAs, audits, or due diligence tests should be implemented.

EIOPA stresses a client-centric approach: acting honestly, fairly, and professionally involves fostering an ethical corporate culture, promoting staff training, adopting data governance policies to reduce biases and make outcomes understandable, regularly monitoring systems, and providing clear redress mechanisms.

Particular attention is given to data governance: data must be complete, accurate, adequate, and properly documented throughout the system lifecycle, including third-party data. Ultimate responsibility always lies with the insurance undertaking. Documentation should ensure traceability, and results must be explainable both to authorities (in technical and comprehensive terms) and to clients (in a clear and comprehensible manner).

 

Human oversight, accuracy, robustness, and cybersecurity

Human oversight must be ensured throughout the system lifecycle. Roles and responsibilities should be clearly defined, with appropriate escalation procedures, training programs and dedicated staff where needed.

Simultaneously, systems must ensure accuracy, robustness, and cybersecurity in line with Solvency II and DORA principles. They should be monitored via specific metrics, including fairness indicators, tested in interactions via APIs, and made resilient to external threats such as data poisoning or cyberattacks. Insurance undertakings must also maintain up-to-date ICT infrastructure and establish continuity plans to ensure the resilience of the AI ecosystem.

 

Conclusions

EIOPA’s opinion confirms that AI isn’t an external element to insurance regulation but a factor testing its adaptability. The challenge lies not merely in adding new rules but in reinterpreting existing ones in light of the evolving technological and regulatory context, while preserving client centrality and the prudential soundness of the sector.
