
10 January 2024 | 10 minute read

AI in healthcare: Latest moves in Congress and what they portend for 2024

As the healthcare industry integrates artificial intelligence (AI) into its business, and evidence grows demonstrating the potential for AI to be transformational in healthcare, policymakers are studying the technology to understand how it might benefit or harm the health system and the people it serves.

In establishing a technology system that is useful, effective, and trusted, it is essential to build a foundation that ensures privacy, security, and data integrity. That is true for any technology application, and it is especially so for AI.

In this alert, we explore key highlights from congressional hearings in November and December 2023 that examined these and other issues regarding AI in healthcare; then we consider what these inquiries may mean for efforts at regulation and policymaking in 2024.

Our focus is on three hearings: 1) the Senate Health, Education, Labor, and Pensions (HELP) Committee's Subcommittee on Primary Health and Retirement Security hearing, “Avoiding a Cautionary Tale: Policy Considerations for Artificial Intelligence in Health Care”;[1] 2) the House Energy and Commerce (E&C) Committee's Subcommittee on Innovation, Data, and Commerce hearing, “Safeguarding Data and Innovation: Setting the Foundation for the Use of Artificial Intelligence”;[2] and 3) the E&C full committee hearing, “Leveraging Agency Expertise to Foster American AI Leadership and Innovation.”[3]

Three key themes: care delivery, insurance coverage, and drug development

Increase efficiency of care delivery: AI has the potential to help physicians meet growing patient demand in a number of ways. For instance, it may increase the amount of time a physician spends with patients by reducing administrative burdens, such as completing paperwork or taking notes. AI has been integrated into chatbots that answer patient questions and has been used to create prompts that help physicians respond more quickly in electronic messaging systems. AI may also assist medical professionals with specialized tasks, such as reading radiographs and providing treatment recommendations. More broadly, AI-driven tools have the potential to expand early diagnosis of conditions such as sepsis and of transmissible infections such as influenza, which could lead to earlier intervention, higher rates of treatment, and earlier detection of outbreaks.

However, while AI tools may expand such capabilities, there are concerns about the possible inaccuracy of an AI-driven diagnosis or recommendation. This is a particular concern if the AI tools are trained on non-representative or incomplete datasets, or if their use is expanded to new patient populations, such as children, without sufficient training data. Human judgment remains central, which underscores the importance of establishing a policy environment that supports the integration of AI tools into the human workflow.

Decisions to grant or deny care reimbursement: Algorithms, including those powered by AI, are used to help inform insurance coverage and care decisions. This may include direction from an insurer to discharge a patient from a site of care such as a hospital.

AI-empowered tools may review patients’ medical histories to assess a prior authorization request – for instance, ensuring that the diagnosis and prior care are consistent with coverage requirements. AI tools can also help reduce abuse and enable earlier identification of unneeded or fraudulent care.

When the AI-driven decision lacks full information, these tools may also pose risks – among them, that patients may be sent out of a setting where they are well cared for and into a setting that puts their health at risk, or that patients may be denied needed care. Such decisions may shift the burden to a home caregiver before it is medically appropriate, or may risk broader harms, such as spreading a disease that is not well controlled.

Drug discovery efficiency: AI has the potential to analyze immense volumes of clinical and medical data, including chemical and genetic targets, to identify promising new drug targets or disease-modifying mechanisms of action.

However, these same tools could also be used to create harmful pathogens that could be weaponized or accidentally released into the environment, jeopardizing public health security.

While the potential for harm has always been present in scientific innovation, AI’s capacity to integrate large datasets increases the speed at which someone could create a dangerous biological, chemical, or other agent, raising significant biosecurity and bioterrorism concerns.

Policy and regulatory opportunities to foster innovation and mitigate risk

Common data sources: When electronic data capture became increasingly prevalent in healthcare delivery, the Health Insurance Portability and Accountability Act (HIPAA) established national standards to enable safe sharing of patient health information. Nearly 30 years later, that information remains difficult to access, including for decisions that inform large-scale health delivery, and much of the health-related information being used by AI lives outside the confines of HIPAA protection. Data sharing is further hindered by the lack of common data platforms and by limited interconnection and interoperability between sites of care. Socioeconomic variables that are well known to affect care, such as income, race/ethnicity, and geographic location, are not well integrated into health datasets.

The outputs of AI algorithms are informed by the datasets they are trained on. If the datasets are biased or include non-representative data, the resulting output could lead to inaccurate decisions and poor patient care. The Biden Administration has called for aligned “industry action on AI around the ‘FAVES’ principles – that AI should lead to healthcare outcomes that are Fair, Appropriate, Valid, Effective, and Safe.”[4] In its early stages, the adoption and integration of AI may also disrupt patient care. Nonuniform use across health systems and in different care settings would further affect the models’ predictions and bias. Policymakers who wish to avoid such problems may focus on developing common data platforms that are comprehensive and inclusive of major sites of care, while protecting the privacy and security of patient data.
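To make the data-representativeness point concrete, the following is a minimal, purely illustrative sketch – not drawn from the hearings, the Administration’s guidance, or any specific healthcare AI system – of how a simple model fit to data dominated by one patient group can perform well for that group while degrading for an underrepresented group whose clinically appropriate cutoff differs. All data, thresholds, and group labels are synthetic assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, threshold):
    # Synthetic patients: one biomarker value; the "correct" intervention
    # decision depends on a group-specific threshold (an assumption for this sketch).
    x = rng.normal(loc=50, scale=10, size=n)
    y = (x > threshold).astype(int)
    return x, y

# Group A is well represented; group B (e.g., a different patient population)
# is underrepresented and has a different clinically appropriate cutoff.
x_a, y_a = make_group(5000, threshold=55)
x_b, y_b = make_group(200, threshold=45)

# The training data is dominated by group A, mimicking a non-representative dataset.
x_train = np.concatenate([x_a, x_b[:20]])
y_train = np.concatenate([y_a, y_b[:20]])

# "Model": choose the single cutoff that maximizes accuracy on the training data.
candidates = np.linspace(30.0, 70.0, 401)
train_acc = [np.mean((x_train > c).astype(int) == y_train) for c in candidates]
cutoff = candidates[int(np.argmax(train_acc))]

def accuracy(x, y):
    return np.mean((x > cutoff).astype(int) == y)

print(f"learned cutoff: {cutoff:.1f}")
print(f"accuracy, group A: {accuracy(x_a, y_a):.1%}")  # high: matches the training mix
print(f"accuracy, group B: {accuracy(x_b, y_b):.1%}")  # lower: group B was underrepresented
```

The same dynamic applies, in more complex form, to the larger models discussed above: performance measured on the populations that dominate the training data can mask poorer performance for those that do not.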

Reimbursement: Healthcare delivery is still largely fee-for-service. Even if AI makes care delivery more efficient, the technology may not proliferate without reliable reimbursement for care providers and technology developers. For example, reimbursement by the Centers for Medicare and Medicaid Services for the use of AI (such as devices or products that utilize AI) remains unclear. The use of AI needs to benefit the delivery system as well as the patient for the health industry to invest in the technology and integrate it into business practices. This may include payment for physician services that are supported by AI, such as chatbot-enabled patient messaging or apps that work directly with patients to support their care.

A healthcare system that embraces and integrates technology and innovative care delivery improvements is a stronger system for providers and patients. Policymakers will need to strike a balance between fostering an environment that incentivizes further investment in AI in healthcare and ensuring AI’s appropriate, fair, and equitable use. As reimbursement models and incentives to adopt AI tools are developed, safeguards against inappropriate medical claim denials and inaccurate care will also be considered.

Transparency: While AI, unlike other types of structured algorithms, has the capacity to integrate large datasets, the amount of explanatory power given to each variable in its decision making is often less clear. As AI is applied to patient care, insurance coverage, or new drug development, there is likely to be increasing interest in understanding the path to the decision and the “why” behind the algorithm’s output.

Policymakers will balance the desire for transparency and replicability with the recognition that a rapidly developing technology will also need an evolving transparency framework. A model trained on today’s data may yield a different answer than one trained on additional data, which means that AI-informed decisions may change over time as new data is integrated.

Oversight: While AI is an evolving technology, established methods of oversight can still be effective in managing its use. This may include, for example, monitoring supply chains to identify shifts in the supply of inputs to dangerous chemical compounds or pathogens that may signal inappropriate drug development, or monitoring for shifts in care patterns within particular populations – such as the elderly, the indigent, or particular racial or ethnic minorities – that may indicate biased decision making.

Policymakers will consider how to use these more traditional oversight tools to protect public safety and privacy, potentially integrating AI into oversight efforts while not limiting the potential of AI-enabled tools to improve health or lower costs.

Administrative and regulatory action to watch for in 2024

In October 2023, President Joe Biden issued an Executive Order with the goal of managing the risks and opportunities of AI – see our DLA Piper insights on the Executive Order here. As part of the Administration’s interest in regulating AI, the Department of Health and Human Services (HHS) has been directed to take a number of steps, including establishing an AI task force charged with developing a strategic plan for AI technologies that are used in various healthcare industries and sectors.

As part of its actions to regulate AI and other machine learning technologies, HHS plans to create a safety program dedicated to rooting out errors in AI technologies used in healthcare, along with a surveillance database for tracking discrimination resulting from the use of AI. HHS will also evaluate how AI technologies are affected by laws that protect against discrimination, to ensure compliance and to identify gaps that create a risk of discrimination.

The Food and Drug Administration (FDA) has already embraced the promise of AI and is working to effectively assess its risks as part of ongoing regulatory improvement efforts. What remains to be seen is how FDA will fully leverage AI as part of drug development, where and how it may apply guardrails, and the way in which it will communicate any such guardrails to drug developers as the use of AI continues to evolve in clinical research and care settings.

Conclusion

The potential of AI to significantly improve our nation’s healthcare industry, for instance by speeding up the drug review process or reducing administrative burdens, can only be fully achieved if there is sufficient investment in and adoption of the technology. As this process moves forward, government support and oversight will be key.

In the near term, we anticipate that policymakers will focus on the potential risks of AI-driven decision making, including inappropriate care denials, biased decision making, and harmful drug or pathogen development. Policymakers typically emphasize risk mitigation early in technological development.

Companies that depend on a supportive policy and regulatory environment for the proliferation of AI tools in healthcare – for patient care, drug development, and claims analysis – may emphasize the ways they are ensuring their tools integrate diverse data and allow for human judgment.

For additional information about this alert or the ways we support organizations that use and develop AI-based technologies, please contact any of the authors or your usual DLA Piper contact.

[1] Senate Health, Education, Labor, and Pensions Committee, Subcommittee on Primary Health and Retirement Security, “Avoiding a Cautionary Tale: Policy Considerations for Artificial Intelligence in Health Care,” November 8, 2023, https://www.help.senate.gov/hearings/avoiding-a-cautionary-tale-policy-considerations-for-artificial-intelligence-in-health-care
[2] House Energy and Commerce Committee, Subcommittee on Innovation, Data, and Commerce, “Safeguarding Data and Innovation: Setting the Foundation for the Use of Artificial Intelligence,” November 29, 2023, https://energycommerce.house.gov/events/health-subcommittee-hearing-understanding-how-ai-is-changing-health-care
[3] House Energy and Commerce Committee, “Leveraging Agency Expertise to Foster American AI Leadership and Innovation,” https://energycommerce.house.gov/events/full-committee-hearing-leveraging-agency-expertise-to-foster-american-ai-leadership-and-innovation
[4] The White House, “Delivering on the Promise of AI to Improve Health Outcomes,” December 14, 2023, https://www.whitehouse.gov/briefing-room/blog/2023/12/14/delivering-on-the-promise-of-ai-to-improve-health-outcomes/
