White House releases guidance for AI acquisition and use in government
New guidance emphasizes risk-based governance, requires “impact assessments”

On April 7, 2025, the White House Office of Management and Budget (OMB) released two memoranda outlining its latest guidance for federal agencies on the acquisition and use of AI.
The two memoranda, titled “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” and “Driving Efficient Acquisition of Artificial Intelligence in Government,” were circulated to agencies on April 3, 2025, and encompass the White House’s latest steps toward encouraging innovation and free-market practices in AI.
The new White House guidance aligns closely with federal legislation introduced last Congress by Senator John Thune (R-SD). That legislation, the Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA) of 2023, was considered by many to be the most comprehensive federal legislation governing AI in the United States. It passed out of the Senate Commerce Committee but did not become law. AIRIA has not yet been reintroduced this Congress but, as Senator Thune is the new Majority Leader and remains a senior member of the Senate Commerce Committee, his AI framework may be an initiative worth monitoring as further AI legislation is introduced over the course of this administration.
As organizations consider compliance, advocacy, and the pursuit of business with the federal government, they are encouraged to note the emphasis and precedents set forth in these memoranda pertaining to both “impact assessments” and “high-risk” use cases.
Below, we address each memorandum in turn.
Accelerating federal use of AI through innovation, governance, and public trust
The memorandum replaces the previous administration’s directive on the same topic and sets out several requirements that align with the government’s three AI strategic priorities: innovation, governance, and public trust.
Driving innovation
Agencies are reminded of their responsibility to remove barriers to future AI adoption and are encouraged to focus on building out existing capabilities for AI development and procurement. This is intended to be achieved through several initiatives.
For example, agencies are required to develop AI strategies within 180 days. These are to be based on a forthcoming OMB template and are to be made publicly available on each agency’s website, with the goal of ensuring accountability for spending.
Agencies are also required to engage in AI resource and data sharing where possible and to leverage US innovation through the promotion of AI research and development. Chief AI Officers will be required to coordinate with other agencies on data interoperability and standardization measures that allow commonly used data packages to be shared, and to take opportunities to build or procure AI.
Federal agencies are also required to develop and maintain a foundational knowledge of AI so that their workforces can leverage AI in the performance of their duties. This is expected to be achieved by creating new AI training resources, promoting AI talent from within, and remaining accountable for maintaining knowledge levels across departments.
Improving AI governance
Agencies are required to identify key individuals, such as Chief AI Officers, responsible for leading AI adoption and promotion of best practices. CFO Act agencies must additionally establish an Agency AI Governance Board within 90 days of publication. These officials will be responsible for steering the direction of many federal agency initiatives, including promotion of responsible innovation and adoption, coordination of compliance initiatives, and advising the heads of agencies on matters of AI.
Fostering public trust in federal use of AI
A key component of the latest guidance is the focus on high-impact AI systems.
High-impact AI is defined as:
“[An] AI system where the output will become the primary basis for decisions or actions that will have a legal, material, binding, or significant effect on:
1. An individual or entity’s civil rights, civil liberties, or privacy;
2. An individual or entity’s access to education, housing, insurance, credit, employment, and other programs;
3. An individual or entity’s access to critical government resources or services;
4. Human health and safety;
5. Critical infrastructure or public safety; or
6. Strategic assets or resources, including high-value property and information marked as sensitive or classified by the Federal Government.”
Agencies are required to consider the impact and use of an AI system’s outputs, rather than the underlying technology, when determining whether the AI is high impact. This avoids many of the difficulties previous regulatory efforts have faced in determining whether specific technologies pose additional risks.
Agencies are instructed to manage high-impact AI systems by implementing minimum risk management practices. These practices should prioritize safe, secure, and resilient deployment of AI that minimizes harm to individuals. Agencies must terminate the use of non-compliant AI systems found to contravene the requirements of the memorandum and other applicable policies.
One example of the mandated risk management practices is the requirement to perform AI impact assessments. These assessments seek to establish key information regarding the AI, including its intended purpose and benefits, data quality, potential societal impacts, and a cost analysis. The inclusion of impact assessments demonstrates that, even while seeking to minimize barriers to innovation, the White House acknowledges the potential risks AI presents to the US and its citizens.
Chief AI Officers may, at their discretion, waive specific risk management practices where applying them would increase risk to safety or where a pilot program is intended. Where waivers are granted, the AI and its use remain subject to ongoing review to minimize the introduction of unnecessary risks.
Changing definitions
The memorandum defines several key terms, some of which indicate a deviation from the previous approach taken by the Biden Administration.
Of note is the definition of “AI system,” which was previously aligned with the approach taken by the international community, stemming from the definition provided by the OECD.
The memorandum, on the other hand, aligns with the Advancing American AI Act and defines an AI system as:

“any data system, software, application, tool, or utility that operates in whole or in part using dynamic or static machine learning algorithms or other forms of artificial intelligence, whether (i) the data system, software, application, tool, or utility is established primarily for the purpose of researching, developing, or implementing artificial intelligence technology; or (ii) artificial intelligence capability is integrated into another system or agency business process, operational activity, or technology system.”

The definition expressly excludes “any common commercial product within which artificial intelligence is embedded, such as a word processor or map navigation system.”
This change in definition suggests a potential deviation from the prevailing international stance, in accordance with the Trump Administration’s goal of prioritizing a US-first approach to AI. However, many US state legislatures have used the international definition of AI system in their own regulations, so it remains uncertain how the change may affect regulatory development at the state level.
Driving efficient acquisition of AI in government
The second memorandum sets out guidance for improving efficiency and mission effectiveness through AI acquisition and supersedes the previous OMB guidance on procurement of AI technology. While much of the previous administration’s guidance is maintained, the latest guidance introduces an additional focus on maximizing the use of US-made AI.
Agency protocol guidance
In accordance with the Trump Administration’s latest executive order on AI, agencies are required to assess and revise internal policies and procedures within 270 days of the memorandum’s issuance. Revisions to existing policies are expected to implement controls that:
- Evaluate proposed AI system acquisitions during future procurements and provide feedback on AI performance and risk management
- Assemble a cross-functional team of agency officials to coordinate and assist in the decision-making process of acquisitions
- Ensure appropriate intellectual property rights terms are being used throughout contracts, and
- Ensure compliance with applicable privacy regulations and policies.
Agencies must enact processes that evaluate the use of government data and that delineate the ownership and IP rights of the government from those of the parties providing it services and tools. Although not mandated, standardization across contracts is highly recommended.
Acquisition process guidance
Agencies are expected to form an internal cross-functional team that informs procurement activities. The team will be responsible for establishing a list of potential risks to be evaluated based on the AI system under review. The team will also be expected to implement processes that align with the AI principles established by the first Trump Administration, including the push for purposeful and performance-driven use of AI.
During the solicitation stage of AI acquisition, agencies are required to comply with several transparency requirements, including disclosure of whether a planned AI system will be used in a high-risk or high-impact use case. In such cases, vendors will be required to comply with these transparency measures and the additional requirements set out in the memorandum. Similarly, agencies are encouraged at this stage to include contractual provisions that protect against vendor lock-in and protect government IP and data rights.
When selecting proposals, agencies are required to establish measures that identify and/or mitigate procurement risks. Where appropriate, agencies are also required to include contract terms that address government-mandated matters, including (i) IP rights and the use of government data, (ii) privacy, (iii) vendor lock-in protections, and (iv) ongoing testing and monitoring.
What this means for organizations developing and using AI for government purposes
Despite certain deviations, many of the practical elements of the new requirements remain aligned with existing AI governance norms in the United States. Future steps taken by agencies to enact these requirements are likely to provide insight into the broader direction of the administration in its use and oversight of AI. This in turn may indirectly set a new tone for AI industry and governance norms within the country as federal requirements are pushed down to the organizations with which agencies interact, including standard-setting bodies and government contractors. Organizations should be ready to demonstrate that they meet these requirements through effective governance measures that use controls, such as AI system red teaming, to identify and mitigate risks. This is particularly relevant for high-risk and high-impact use cases, which may be subject to more frequent regulatory investigation and more likely to become the subject of litigation.
Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy experts assists organizations in navigating the complex workings of their AI systems to guide compliance with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and its impact on industry across the world.
For an overview of executive actions taken thus far by the new administration, please see this compilation published by DLA Piper.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.