
17 November 2025

AI and insurance: EIOPA responds to stakeholder concerns

Following this spring's public consultation on “AI Governance and Risk Management,” the European Insurance and Occupational Pensions Authority (EIOPA) published, on 6 August 2025, its Feedback Statement responding to the concerns raised by stakeholders. There are several points worth noting.

 

New opinion, new obligations?

Several stakeholders appreciated the objective of EIOPA's opinion, its principles-based approach and its focus on risk and proportionality, as well as its alignment with the AI Act. But some were concerned that the repeated use of the term “should” could create implicit obligations, especially for small businesses or low-risk systems, suggesting that the opinion should be considered a mere recommendation. Others asked for more detailed guidance, such as risk assessment criteria or standardised documentation templates to facilitate its application.

Views on the opinion’s timeliness were divided: some expressed the need to wait for the implementing measures of the AI Act and any Commission guidelines on the use of AI in the financial sector. Others pointed out that most AI systems are not “high-risk” under the AI Act, so immediate action is needed to promote a consistent approach by operators and convergence among supervisory authorities.

EIOPA stressed the importance of international convergence on the general principles of AI governance, which should be applied taking into account the specificities of the insurance sector.

EIOPA then clarified that the opinion doesn’t introduce new obligations but provides guidance on how to interpret insurance sector regulations in the context of AI systems, adopting a holistic approach based on risk and proportionality. According to EIOPA, the use of terms like “should” is consistent with other guidance documents and doesn’t imply binding obligations.

 

Does the opinion overlap with the AI Act?

Stakeholders agreed with the scope of the opinion, which covers AI systems in the insurance sector that aren’t considered prohibited practices or high-risk systems under the AI Act. Some requested clarification on the distinction between high-risk systems under the AI Act and those that could be considered high-risk following the impact assessment provided for in the opinion.

There was also agreement on the use of the definition of “AI systems” in the AI Act, although clarification was sought on traditional models such as Generalized Linear Models (GLMs). Finally, some expressed concerns that references to Solvency II and the IDD could inadvertently extend the regulatory scope, for example in relation to data management or the fair treatment of consumers.

In response, EIOPA confirmed that the opinion applies only to AI systems that don’t fall within prohibited practices or count as high-risk, avoiding unnecessary regulatory overlap and burdens. And it clarified that the high-risk assessment under the AI Act has legal meaning and implications independent of the impact analysis provided in the opinion. The opinion doesn’t extend the requirements of the AI Act and recognises that risk levels may vary among systems not considered “high-risk.”

With regard to the definition of AI systems, the opinion now explicitly refers to the European Commission's Guidelines and notes that further clarification may be provided in the future. Finally, the references to Solvency II, DORA, and the IDD don’t change those rules' scope of application, but the principles of the opinion, such as fairness, human oversight, and data governance, remain relevant and must be applied proportionately even by operators not subject to those rules, ensuring fair treatment of customers.

 

What are the burdens for small businesses and what efforts for low-risk systems?

In general, stakeholders welcomed the risk-based and proportionate approach outlined in the opinion, while expressing concerns about the possible compliance costs for low-risk AI systems and small businesses. Some asked for more detailed guidance on the impact and assessment criteria, suggesting that a distinction be made between customer-facing and internal systems, between new technologies such as generative AI and established machine learning technologies, and that the environmental impact be considered.

There were also requests for clarification on how to balance the nature of the system and its potential impact on consumers without compromising supervisory convergence at the European level.

EIOPA confirmed that it will maintain its risk-based and proportionate approach, balancing opportunities and risks and limiting compliance burdens for low-risk systems and small businesses. The opinion clarifies that supervisory expectations for low-risk systems will be limited and that the impact assessment may be conducted in a manner proportionate to the actual impact of the system.

Without providing an exhaustive list, EIOPA has added examples of assessment criteria to the opinion, such as the distinction between customer-facing applications and internal uses with no direct impact on customers, and the number of customers involved. The assessment must consider the impact on both consumers and operators, taking into account the nature and complexity of the AI systems.

 

Which risk management systems, and how should third-party risk be managed?

Some stakeholders asked for clarification on whether existing Enterprise Risk Management (ERM), model risk management, or Product Oversight and Governance (POG) frameworks can be used to govern the use of AI systems, without having to develop new, AI-specific tools. A distinction between the responsibilities of “developers” and “deployers” was also suggested, highlighting the difficulties in ensuring data governance or “explainability” for tools provided by third parties, such as generative AI.

EIOPA confirmed that companies can use existing or updated frameworks (ERM, model risk management, POG, or IT, data, and AI strategies) as long as they reflect the key principles outlined in the opinion. Furthermore, since they remain responsible for the AI systems they use, even when those are developed by third parties, companies must obtain information and guarantees about the systems provided. And where they face difficulties in implementing certain principles (eg data governance or explainability), companies must mitigate the risks with complementary measures: contractual clauses, external audits, due diligence, and continuous monitoring.

 

What now?

The conclusion of the public consultation is more of a starting point than a finish line. Further clarification and guidelines can be expected from EIOPA, the Commission, and national authorities in the near future. In the meantime, operators must initiate and consolidate their compliance plans, navigating a complex and fragmented regulatory landscape that includes the European AI Act, Italian adaptation legislation, and guidance from supervisory authorities. This will be far from a simple and straightforward process.
