10 April 2024 | 2 minute read

Ensuring excellence: Strategies for rigorous testing of healthcare AI systems

The accelerating adoption of AI in healthcare has raised concerns that AI tools may exacerbate health inequities, inaccuracies, and biases. Consumer advocates and regulators are increasingly calling on healthcare organizations that adopt AI to demonstrate the effectiveness and fairness of their AI applications. Recent developments, such as the White House Executive Order on AI, are heightening the need for healthcare and life sciences organizations to consider the role of AI testing in the development and deployment of their AI systems.

DLA Piper’s AI and Data Analytics group, Duke Institute for Health Innovation (DIHI), Micky Tripathi of the Office of the National Coordinator for Health Information Technology (ONC), and Troy Tazbaz of the US Food and Drug Administration (FDA), address the following topics:

  • Healthcare AI testing and the regulatory landscape
  • Problems created by the expanding use of AI in healthcare
  • Solutions to those problems, such as AI assurance testing – including accuracy, bias, and discrimination testing


Micky Tripathi, Ph.D., M.P.P.
National Coordinator for Health Information Technology, ONC

Troy Tazbaz
Director of the Digital Health Center of Excellence, FDA

Suresh Balu, M.B.A.
Associate Dean for Innovation and Partnership, DIHI

Mark Sendak, M.D., M.P.P.
Clinical Data Scientist, DIHI

Danny Tobey, M.D., J.D.
Chair, AI and Data Analytics, DLA Piper

Sam Tyner-Monroe, Ph.D.
Managing Director of Responsible AI, DLA Piper