21 July 2025
6 minute read

Using AI to commit insurance fraud

AI has many benefits. It can improve efficiency, help make better decisions, and encourage innovation across different industries. But these advantages also come with serious risks – especially the potential for misuse in fraud or deception.

Like any powerful technology, AI can be used for both helpful and harmful purposes. This makes strong and thoughtful governance essential to maximize its benefits and protect against misuse.

Roberto Copia, Director at IVASS Inspectorate Service, spoke about this issue at the 4th National Congress of the CODICI Association on 17 May 2025. He pointed out a growing ethical concern: while AI can improve the efficiency of the insurance industry, it can also give fraudsters more advanced tools to commit fraud.


AI and Insurance: An inseparable alliance

AI has become an indispensable tool in the insurance sector. Its applications range from risk assessment to product development, claims management and fraud prevention. Predictive algorithms, neural networks, and machine learning models allow the processing of vast datasets, improving underwriting accuracy, accelerating claim settlements and strengthening insurers' anti-fraud capabilities.
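
To make the fraud-prevention side of this alliance concrete, the sketch below shows, in purely illustrative form, the kind of unsupervised anomaly detection an insurer might run over claims data. The features, figures and contamination rate are invented for the example and are not drawn from any real portfolio.

```python
# Illustrative sketch: unsupervised anomaly detection on claims data.
# All feature names and numbers are hypothetical; real insurers use far
# richer inputs (claim history, network links, document metadata, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic claims: [claim_amount_eur, days_from_policy_start, prior_claims]
normal = rng.normal(loc=[2_000, 400, 1], scale=[800, 150, 1], size=(500, 3))
# A few suspicious claims: large amounts filed soon after policy inception
suspicious = rng.normal(loc=[15_000, 10, 4], scale=[3_000, 5, 1], size=(10, 3))
claims = np.vstack([normal, suspicious])

# Fit an isolation forest; 'contamination' encodes our assumed fraud share
model = IsolationForest(contamination=0.02, random_state=0).fit(claims)
flags = model.predict(claims)  # -1 = anomalous, 1 = normal

print(f"Flagged {np.sum(flags == -1)} of {len(claims)} claims for review")
```

In practice, such scores would feed a human review queue rather than trigger automated decisions, consistent with the human-oversight requirements discussed below.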

But these very tools – powerful, scalable and increasingly accessible – are also being weaponized by fraudsters. As Copia noted, “Those who seek to commit fraud are often skilled innovators – frequently one step ahead of those tasked with stopping them.” (Our translation).


Insurance fraud in the age of AI: A quantum leap in criminal sophistication

Insurance fraud has always been a structural problem in the sector. Yet today, it’s undergoing a qualitative shift. We’re no longer dealing solely with fraudulent damage to property (Article 642 of the Italian Criminal Code) or fictitious claims. Modern fraud is digital, automated and highly sophisticated. AI has become a powerful enabler for those seeking to manipulate data, forge documents or create false digital identities.

A paradigmatic example is the Ghost Broker scam: websites that appear legitimate, often employing advanced social engineering techniques, real logos, and data stolen from unwitting intermediaries. AI allows these fraudulent portals to appear increasingly credible, complete with chatbots simulating customer service, AI-driven profiling of potential victims, and the delivery of highly personalized fake offers. The result is a seemingly flawless customer journey. But the buyer is left uninsured and unknowingly defrauded until a roadside inspection reveals the deception.
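
The most reliable defence against ghost brokers remains verification against official sources. The following hypothetical sketch illustrates such a check: it assumes a locally downloaded export of an intermediaries register, and the file name, column layout and exact-match logic are all assumptions made for the example. In practice, consumers should consult IVASS's own online registers directly.

```python
# Illustrative consumer-side check against an official register.
# Assumes a CSV export of authorized intermediaries downloaded beforehand;
# the file name and column name below are hypothetical.
import csv

def is_registered(name: str, register_path: str = "rui_export.csv") -> bool:
    """Return True if `name` appears in the register export."""
    needle = name.strip().lower()
    with open(register_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if needle == row.get("intermediary_name", "").strip().lower():
                return True
    return False

if __name__ == "__main__":
    seller = "Cheap Motor Cover Ltd"  # name shown on the suspicious website
    if not is_registered(seller):
        print(f"'{seller}' not found in the register: do not pay, report it.")
```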


The cost of fraud: More than just financial

The statistics reveal a concerning trend. Forbes estimates that global insurance fraud costs around USD 300 billion annually.

In Italy, the picture is no better: according to IVASS’ 2023 Anti-Fraud Report, the share of suspected fraudulent claims in the motor liability sector rose sharply from 16% to 21% in just one year.

But the impact of insurance fraud extends far beyond the immediate financial losses suffered by insurers. At a systemic level, the consequences are even more troubling. Public safety is compromised when uninsured vehicles remain on the roads: injured third parties must then turn to the Guarantee Fund (Fondo di Garanzia per le Vittime della Strada), which is financed by all law-abiding, insured drivers.

Reputational damage is another major concern, as legitimate insurers and intermediaries see public trust eroded, particularly in cases involving identity theft, an offence addressed under Article 494 of the Italian Criminal Code.

The growing incidence of fraud undermines confidence in digital channels, slowing the industry’s transition to more efficient and accessible online services. And consumers themselves aren’t spared: many of the victims are young, digitally literate individuals who, despite their digital skills, often lack a clear understanding of how insurance mechanisms work, making them particularly vulnerable to scams.


The role of the AI Act and the institutional response

To address these emerging challenges, the EU has adopted Regulation (EU) 2024/1689 (the AI Act), a cornerstone in the legal governance of AI in the internal market. While its primary aim is to establish a risk-based framework ensuring safety and fundamental rights, the AI Act also has relevant implications for the insurance sector, particularly in relation to fraud prevention and risk management.

Some AI systems, such as those used for biometric identification, credit scoring, or emotion recognition, are classified as high-risk and must comply with strict requirements, including risk assessment, transparency, traceability, and human oversight. Although “AI systems used for the purpose of detecting financial fraud” are not classified as high-risk under the AI Act, they may be considered high-risk when combined with other high-risk systems or when they include key high-risk features. This opens the door to closer scrutiny of both the AI tools used by insurers and those misused by fraudsters.

The Regulation also introduces new obligations for AI systems with transparency risks, especially those capable of generating synthetic content, deepfakes, or convincingly imitating real individuals – frequent features in ghost broker scams and identity theft schemes. Providers and deployers of such systems now have to ensure transparency, watermarking, and documentation, enhancing the traceability of malicious uses.

These provisions, combined with sector-specific laws (eg the Digital Services Act (DSA) and the GDPR), may serve as legal levers to hold developers and operators accountable when their technologies are repurposed for fraudulent ends. More importantly, the AI Act places a duty on member states and competent authorities to monitor the use of high-risk systems and to facilitate coordination at EU level.

This provides an opportunity for national regulators such as IVASS to play a proactive role not only in supervising industry compliance but also in reporting and investigating abuses involving AI in the insurance domain.

IVASS, in cooperation with law enforcement authorities, has introduced a range of countermeasures to combat digital insurance fraud. These include the takedown of fraudulent websites, public awareness campaigns, network analysis, centralized claims databases, the creation of white lists, and the use of AI tools to identify suspicious patterns. Despite these efforts, institutional responses often struggle to keep pace with the rapid evolution of fraud techniques. At the same time, many insurers are investing in anomaly-detection systems and cross-verification technologies. Yet, criminal complaints remain limited, and fraud continues to be viewed – too often – as an unavoidable cost of doing business.
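
One of those countermeasures, network analysis, lends itself to a brief illustration. The toy sketch below links the parties named in each claim and flags pairs that recur across several otherwise unrelated claims, a classic indicator of collusion rings. The data and threshold are invented for the example.

```python
# Illustrative sketch of the "network analysis" countermeasure: link the
# parties named in each claim and flag pairs who co-occur across many
# unrelated claims. Names and threshold are invented.
import networkx as nx
from itertools import combinations

# Each claim lists the parties involved (claimant, witness, repair shop...)
claims = {
    "C1": ["Rossi", "Bianchi", "Officina X"],
    "C2": ["Verdi", "Bianchi", "Officina X"],
    "C3": ["Neri", "Bianchi", "Officina X"],
    "C4": ["Russo", "Esposito", "Officina Y"],
}

G = nx.Graph()
for claim_id, parties in claims.items():
    for a, b in combinations(parties, 2):
        # Count how many distinct claims link each pair of parties
        w = G.get_edge_data(a, b, default={"claims": 0})["claims"]
        G.add_edge(a, b, claims=w + 1)

# Pairs recurring across several claims are worth a closer look
SUSPICION_THRESHOLD = 3
for a, b, data in G.edges(data=True):
    if data["claims"] >= SUSPICION_THRESHOLD:
        print(f"Review pair {a} & {b}: linked by {data['claims']} claims")
```

Real deployments run on graphs with millions of nodes and combine such structural signals with anomaly scores of the kind sketched earlier.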


Towards a new paradigm: Accountability, awareness and law

Addressing this challenge requires both technical solutions and a cultural shift. Legal frameworks have to catch up with the rapid rise of generative AI, especially when it’s used for criminal purposes like counterfeiting, fraud or identity theft. Moving forward, several key actions are critical:

  • Automated accountability mechanisms: AI systems must not only detect anomalies but also certify and document fraudulent activity to support legal proceedings (a minimal sketch follows this list).
  • Enhanced public-private collaboration: stronger data sharing and joint risk analyses between regulators and private actors are needed.
  • Mandatory consumer education: as in the banking sector, consumers should receive mandatory training on the safe and informed use of digital insurance platforms.
  • A harmonized EU framework: a cohesive European approach to ghost broking and digital insurance fraud is required, integrating the objectives of the DSA with sectoral supervisory efforts.
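
To illustrate the first point, here is a minimal, hypothetical sketch of a hash-chained audit log: each fraud alert records the hash of the previous entry, so any later tampering with the evidence trail becomes detectable. The schema and field names are assumptions; a production system would add trusted timestamps, digital signatures and secure storage.

```python
# Illustrative "automated accountability" record: each fraud alert is
# appended to a hash-chained log so later tampering is detectable.
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> None:
        """Append an event, chaining it to the previous entry's hash."""
        entry = {"ts": time.time(), "event": event, "prev": self._last_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any modified entry breaks it."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record({"type": "anomaly", "claim_id": "C1", "score": 0.97})
log.record({"type": "document_forgery_suspected", "claim_id": "C3"})
print("chain intact:", log.verify())
```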