HHS releases AI Strategic Plan: Key takeaways for businesses
Ten days before a change in administration, the US Department of Health and Human Services (HHS) issued its Artificial Intelligence (AI) Strategic Plan (Strategic Plan, or Plan). While the Plan highlights optimism about AI and its exciting future use cases, HHS cautions that stakeholders must collectively take care to ensure that the use of AI does not compromise the safety, effectiveness, equity, or accessibility of healthcare services. HHS’ approach serves as a roadmap for stakeholders across the life sciences and healthcare sectors as these emerging technologies continue to rapidly shape the delivery of healthcare.
Overview of the AI Strategic Plan
HHS’ overarching aim is to foster a coordinated public-private approach to AI regulation and use that targets four key objectives:
- Catalyzing health AI innovation and adoption to unlock new ways to improve lives
- Promoting trustworthy AI development and ethical and responsible use to avoid potential harm
- Democratizing AI technologies and resources to promote access to them
- Cultivating AI-empowered workforces and organizational cultures to effectively and safely use AI
The Strategic Plan is organized into “Primary Domains” and “Additional Domains.” Primary Domains represent specific parts of the HHS value chain, including 1) medical research and discovery, 2) medical product development, safety, and effectiveness, 3) healthcare delivery, 4) human services delivery, and 5) public health.
Additional Domains are functional areas that span the Primary Domains and are necessary to implement the Plan, including cybersecurity and critical infrastructure protection, and internal operations. For each domain, the Plan analyzes opportunities for AI application, current AI use trends, risks associated with AI use cases, and HHS’ action plan for that domain.
Summary of action plans
The action plan for each domain is non-exhaustive and describes existing activities and near- and long-term priorities that support HHS’ four key goals. While the action plans are detailed, HHS recognizes that each domain is an evolving space and intends the action plans to remain flexible, noting that it will continue to evaluate them as technologies and use cases evolve. Below, we highlight some of the action plans across the domains.
Medical research and discovery
This domain focuses on research and discovery of medical products and AI’s use in biomedicine, including through clinical trials. HHS identifies the need for large volumes of high-quality data to support AI technologies at scale as a key barrier in medical research. HHS intends to democratize data access by promoting public-private partnerships, supporting multi-institutional research collaborations, and promoting interoperability standards. HHS further intends to focus on equitable AI access, particularly for traditionally underserved populations, to create a more diverse and inclusive research landscape.
Additionally, HHS highlighted concerns with respect to data breaches and biosecurity. HHS anticipates providing national guidelines on protecting AI models and health data from adversarial attacks, data-sharing protocols that protect sensitive health information, and mechanisms to reduce harm from misuse of predictive analytics.
Medical product development, safety, and effectiveness
Incorporating AI into medical products and their development, use, or other operations comes with numerous risks, including risks to patient safety and the potential for bias. HHS references the regulatory guidance on AI issued by its various divisions and notes that additional guidance is forthcoming on AI oversight, clarification of AI payment pathways, and the prioritization of safe AI in resourcing programs. HHS further reaffirms the product life cycle approach taken by the Food and Drug Administration (FDA), emphasizing continued review and testing to ensure safe, effective, and fair application of AI outputs.
The Plan highlights that, as of August 2024, FDA has authorized approximately 1,000 AI-enabled medical devices and received over 550 drug and biological product submissions with AI components (AI-Enabled Products), all with differing use case risks but generally with the same overarching development “lifecycle.” HHS also plans to explore resourcing for research on AI bias and to evaluate approaches to bolster AI quality assurance in medical products and across the medical product lifecycle.
Healthcare delivery
HHS cautions that AI within healthcare delivery may impact patient safety, deteriorate patient-provider relationships, and create barriers to, or inappropriate administration of, healthcare as a result of algorithmic bias. In response, HHS will prioritize supporting research on best practices for procuring, deploying, and monitoring AI tools in healthcare delivery settings, placing patient-centric interventions, along with transparency, safety, equity, and security, at the forefront of implementation considerations. Under the Plan, HHS will also provide guidelines on how to test and pilot AI applications within healthcare institutions before fully implementing them into healthcare delivery. Additionally, HHS plans to disseminate AI impact and AI-readiness assessment templates, implementation toolkits, technical assistance resources, and AI governance guidelines for healthcare delivery organizations considering using AI. If developed, these tools are expected to assist with AI adoption and bring uniformity to the assessment of potential AI use cases across healthcare delivery settings.
Cybersecurity and critical infrastructure protection
HHS recognizes that healthcare organizations are struggling to keep abreast of cybersecurity threats and accordingly emphasizes that securing digital systems from cyber threats is crucial for realizing the benefits and minimizing the risks of emerging technologies across domains. The action plan for cybersecurity and critical infrastructure protection builds on HHS’ Cybersecurity Strategy, released in December 2023, and its Cybersecurity Performance Goals, released in January 2024. In part, HHS plans to:
- Address the shortage of appropriately skilled cybersecurity workers to fill roles within health information technology (IT)
- Support standardization and alignment on best practices in cybersecurity governance
- Encourage health IT developers to implement privacy and security by design in their products, or offer service application programming interfaces to integrate cyber controls into other systems, and
- Clarify approaches to cybersecurity that navigate the tensions between privacy and fairness, and between privacy and safety, in providing health and human services.
Takeaways from the AI Strategic Plan
Overall, the Strategic Plan presents HHS’ approach to striking a balance between fostering innovation and mitigating risk. HHS flags bias, data source quality, accessibility, and cybersecurity as key challenges to the innovative, safe, and responsible use of AI. HHS’ promised guidance, AI impact and readiness assessments, and toolkits have been welcomed by the healthcare industry. The Plan presents stakeholders with an opportunity to participate in shaping the guiding principles behind the use of AI in the US healthcare ecosystem.
However, much like the recent Proposed Rule to modify the HIPAA Security Rule, analyzed in our previous client alert, HHS’ progress in implementing the Strategic Plan may face challenges under the new administration, which has expressed an intention to repeal the current administration’s executive order establishing guardrails around artificial intelligence. If that occurs, state attorneys general, legislators, and litigants may step in with a less uniform approach to regulating and implementing AI across healthcare and life sciences settings.
DLA Piper will continue to monitor developments surrounding updates to the AI Strategic Plan. For more information about these developments, please contact your DLA Piper relationship partner, the authors of this alert, or any member of our AI or Healthcare industry groups.