
24 July 2023

The NAIC Model Bulletin on Algorithms and Predictive Models in Insurance: Key takeaways

The National Association of Insurance Commissioners (the NAIC), a national standard-setting organization for the insurance industry, has released an exposure draft of the NAIC Model Bulletin: Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers.

This Bulletin, issued on July 17, 2023, is intended to be adopted by state departments of insurance and distributed to insurers to provide guidance on the use and implementation of advanced analytical and computational technologies, including artificial intelligence (AI) systems. In it, the NAIC outlines expectations for how insurers will govern the development and use of these technologies.

The Bulletin makes clear that insurers are expected to adopt practices – including governance frameworks, risk management protocols, and testing methodologies – that are designed to assure that the use of AI systems does not result in unfair discrimination or unfair claims settlement practices. It goes on to provide guidelines for AI system programs and provides insight into what insurers can expect in the context of an investigation or market conduct action related to their use of AI systems.

The Bulletin acknowledges that, like many other industries, the insurance industry is implementing AI techniques across all stages of the insurance life cycle, and these techniques are having a transformative effect. The NAIC supports innovation and the development and use of AI systems that contribute to safe and stable insurance markets, while noting that AI systems may pose risks such as bias, unfair discrimination, and data vulnerability.

When implementing advanced analytical and computational technologies to make or support decisions affecting consumers’ interests, the Bulletin says, insurers are expected to comply with all applicable insurance laws and regulations as well as with the Principles of Artificial Intelligence that the NAIC adopted in 2020. That publication underscores the importance of using AI systems that are fair, accountable, ethical, transparent, secure, and robust.

Moreover, the Bulletin emphasizes that these requirements apply regardless of the methodology used by the insurer to develop rates, rating rules, or rating plans that are subject to regulation.

AIS programs

The NAIC also provides guidelines regarding the implementation and adoption of written programs governing the use of AI systems (AIS programs). AIS programs are designed to assure that decisions impacting consumers that are made or supported by AI systems are accurate and do not violate unfair trade practice laws or other applicable legal standards. The NAIC-provided guidelines include general information about the purpose and structure of an AIS program, as well as recommendations related to program governance, risk management, internal controls, and the implementation of third-party AI systems. The guidelines outline a framework that insurers may follow to develop, implement, and maintain their own AIS programs in order to assure that decisions made using AI systems meet all applicable legal standards.

Regulatory oversight

Finally, the Bulletin provides information related to the regulatory oversight of insurers’ use of AI systems. This guidance includes an outline of information requests an insurer can expect during an investigation or market conduct action. The NAIC notes that insurers will be asked to provide detailed documentation and information on AI system governance, risk management, and use protocols, as well as information related to the data, models, and AI systems developed by third parties. The Bulletin explains that the existence of an AIS program will facilitate such investigations and actions.

Key takeaways

The NAIC Model Bulletin provides guidelines for insurers to use when implementing AI systems to ensure that their use complies with all applicable federal and state laws and regulations. It emphasizes the importance of AIS programs, AI governance, and documentation.

To comply with the Bulletin’s guidelines, the NAIC suggests several actions insurers may take:

  • Verify: Use of verification and testing methods for AI systems that detect unfair bias leading to unfair discrimination (an illustrative sketch of one such test appears after this list).

  • Govern: Adoption of robust governance, risk management controls, and internal audit functions to mitigate the risk that AI systems will violate unfair trade practice laws and other applicable legal standards.

  • Document: Adoption of a written program for the use of AI systems designed to assure that decisions impacting consumers which are made or supported by AI systems are accurate and do not violate unfair trade practice laws or other applicable legal standards.
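
By way of illustration only, the sketch below shows one common approach to such testing: comparing favorable-outcome rates across groups of consumers and flagging large gaps using the widely cited four-fifths rule. The group labels, sample data, helper function name, and 0.8 threshold are assumptions chosen for demonstration; the Bulletin does not prescribe any particular metric or tool.

```python
# Illustrative sketch only (not part of the NAIC Bulletin): a basic
# disparate impact check comparing favorable-outcome rates across groups.
# Group labels, sample data, and the 0.8 ("four-fifths rule") threshold
# are assumptions chosen for demonstration.
from collections import defaultdict

def disparate_impact_ratios(decisions, threshold=0.8):
    """decisions: iterable of (group, favorable) pairs, favorable being a bool.
    Returns, per group, the favorable-outcome rate, its ratio to the
    most-favored group's rate, and whether it falls below the threshold."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, favorable in decisions:
        counts[group][0] += int(favorable)
        counts[group][1] += 1

    rates = {g: fav / total for g, (fav, total) in counts.items()}
    best = max(rates.values())  # assumes at least one favorable outcome overall
    return {
        g: {
            "rate": round(rate, 3),
            "ratio_vs_best": round(rate / best, 3),
            "flagged": (rate / best) < threshold,
        }
        for g, rate in rates.items()
    }

# Hypothetical underwriting decisions: (group label, favorable outcome?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for group, result in disparate_impact_ratios(sample).items():
    print(group, result)
```

A simple ratio check of this kind is only a starting point; consistent with the Bulletin’s emphasis on governance and documentation, any such testing would ordinarily be recorded and reviewed under the insurer’s AIS program.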

DLA Piper is a market leader in advising insurers on all aspects of artificial intelligence. From testing AI systems to designing and implementing AI governance frameworks, DLA Piper has the experience to help insurers mitigate their legal risks.

We are tracking these issues closely and will continue to report on developments as they occur.

If you would like to discuss how this affects your company and possible strategies for compliance, please contact any of the authors. To find out more about our AI and Data Analytics work, please visit this page.

