10 April 2024 · 2 minute read

Ensuring excellence: Strategies for rigorous testing of healthcare AI systems

The rapid adoption of AI in healthcare has raised concerns that AI tools may exacerbate health inequities and introduce inaccuracies and biases. Consumer advocates and regulators are increasingly calling on AI adopters in healthcare to demonstrate that their applications are effective and fair. Recent developments, such as the White House Executive Order on AI, are heightening the need for healthcare and life sciences organizations to consider the role of AI testing in the development and deployment of their AI systems.

Micky Tripathi of the Office of the National Coordinator for Health Information Technology (ONC), Troy Tazbaz of the US Food and Drug Administration (FDA), Suresh Balu and Mark Sendak of the Duke Institute for Health Innovation (DIHI), and Danny Tobey and Sam Tyner-Monroe of DLA Piper’s AI and Data Analytics group address the following topics:

  • Healthcare AI testing and the regulatory landscape
  • Problems created by the expanding use of AI in healthcare
  • Solutions to those problems, such as AI assurance testing, including testing for accuracy, bias, and discrimination


View the video below:


Micky Tripathi, Ph.D., M.P.P.

National Coordinator for Health Information Technology, ONC

Troy Tazbaz

Director of the Digital Health Center of Excellence, FDA

Suresh Balu, M.B.A.

Associate Dean for Innovation and Partnership, DIHI

Mark Sendak, M.D., M.P.P.

Clinical Data Scientist, DIHI

Danny Tobey M.D., J.D.

Chair, AI and Data Analytics, DLA Piper

Sam Tyner-Monroe, Ph.D.

Managing Director of Responsible AI, DLA Piper