
11 April 2024 · 9 minute read

Deciphering OMB's guidance to agencies on AI: A blueprint for responsible AI in government

The issuance of memorandum "M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence" by the Office of Management and Budget (OMB) marks a pivotal moment for the integration of artificial intelligence (AI) within federal agencies. It seeks to mandate a balanced approach that seizes AI's potential while safeguarding public rights and safety.

In this update, we outline the memorandum's key directives, emphasizing governance, innovation, and risk management, and offer strategic guidance for compliance and ethical AI deployment for both the US government and businesses.

Background

The OMB issued a comprehensive memorandum to the heads of executive departments and agencies in an effort to lay the groundwork for responsible AI use within the federal government. This initiative underscores the importance, for AI developers, deployers, and users, of advancing AI governance, fostering innovation, and meticulously managing associated risks, particularly those impacting the public.

Understanding the OMB memorandum: A new era for federal AI use 

The memorandum represents a comprehensive effort to align the expansive capabilities of AI technologies with ethical, legal, and social standards expected by the public. The release acknowledges AI's potential to revolutionize public service delivery through enhanced efficiency and innovative solutions but also underscores the critical need for robust governance frameworks to manage the inherent risks associated with AI deployment. 

Through the introduction of mandates such as the appointment of chief AI officers (CAIOs) and the development of detailed AI strategies, the memorandum seeks to ensure that AI systems are developed, deployed, and monitored with a keen focus on accountability, transparency, and fairness. The initiative aims to safeguard against biases, privacy infringements, and other potential harms related to AI use.

Action plan for responsible AI integration

The memorandum envisions the following specific calls to action:

  • Designating CAIOs: The head of each agency must designate a CAIO within 60 days. To enable CAIOs to fulfill their responsibilities, agencies with existing CAIOs must determine whether to grant them additional authority or to appoint a new CAIO. Agencies will be required to identify these officers to OMB through the designation data collection process.

  • Convening agency AI governance bodies: Within 60 days, each agency identified under the Chief Financial Officers Act[1] must convene its relevant senior officials to coordinate an approach to effective governance of issue areas related to federal use of AI.

  • Compliance plans: Within 180 days, and every two years thereafter until 2036, each agency must submit to OMB and post publicly on the agency’s website either its chosen plan to meet the requirements of the memorandum or a determination that it does not use and has no anticipated use of AI covered by the memorandum. Agencies must also include any plans to update existing AI principles and guidance to ensure consistency with the wider provisions of the memorandum’s approach.

  • AI use case inventories: Each agency (excluding the Department of Defense and the Intelligence Community) must inventory its AI use cases on an annual basis, submit the inventory to OMB, and post it publicly on the agency’s website. Agencies will be expected to complete this requirement in accordance with OMB guidelines, which are to be issued before the end of 2024. Inventories should identify which use cases involve safety-impacting or rights-impacting AI and indicate any identified risks and how they are being managed.

  • Reporting on AI use cases not subject to inventory: The memorandum recognizes that some AI use cases, such as those of the Department of Defense, will not be required to be inventoried. In such cases, agencies must report and release aggregate metrics on use cases and their compliance measures in accordance with the requirements of the memorandum.

Key components of the memorandum: Risk management and impact assessments

At its core, the memorandum emphasizes the need for robust governance structures, spearheaded by the appointment of CAIOs within each agency. These CAIOs are tasked with the critical role of overseeing the ethical deployment of AI technologies, ensuring that AI systems are developed and utilized in a manner that aligns with public interests and safeguards civil liberties.

In addition to establishing clear leadership roles and plans for action, the memorandum mandates the development of comprehensive AI strategies by each agency. 

The document also outlines specific risk management practices that agencies are required to follow, particularly for AI applications that have a significant impact on public safety and rights. This includes the deployment of comprehensive AI impact assessments, which must be put in place no later than December 1, 2024. The purpose of these impact assessments is to provide quantifiable information on the outcomes of AI deployment, including cost reduction, risks to the safety or rights of individuals, and any clear steps for mitigation. In cases where quantifiable information is unlikely to be produced, agencies will be required to provide qualitative analysis in their assessments to demonstrate anticipated improvements to the efficiency and safety of their work.

In mandating these impact assessments, the memorandum focuses closely on the quality and relevance of the data agencies use. To this effect, the memorandum requires that agencies assess data quality at the point of design, development, training, testing, and operation of any AI systems or models. Where an agency cannot obtain this information directly, it must be obtained from any applicable vendors. At minimum, agencies will be expected to document and report:

  • The process by which data is collected

  • The quality and representativeness of the data

  • The overall relevancy of the data in relation to the specific task

  • Whether the data is sufficiently expansive to account for the range of real-world inputs that might be encountered and the method used to account for data gaps, and

  • Whether the data is publicly disclosable (where data is in the control of the federal government).

The specific identification of risk management practices and impact assessments aligns with a growing international trend of industry standards bodies and regulators seeking to encourage (and/or mandate) the inclusion of comprehensive governance tools in departmental and organizational AI frameworks. As with other international approaches, such as that found in the EU’s AI Act, the memorandum gives broad considerations and direction for the implementation of these requirements. Specific steps and protocols are therefore likely to be filled in by existing and developing international and technical standards, such as those under development by NIST, ISO, CEN, and CENELEC (eg, ISO/IEC 42001:2023, which details an approach to AI management systems, and ISO/IEC 42005, which is being developed to provide clear steps for the implementation of AI impact assessments).

It is anticipated that further guidance on how this will be implemented, and what it means for organizations working in partnership with government agencies, will be issued in the coming months.

Broader implications for federal AI adoption 

The memorandum demonstrates a shift in the federal government's approach to AI and sets a precedent for technical developments across various sectors engaged in AI technologies. Its directives signal far-reaching implications for a wide array of stakeholders, encouraging a holistic view of AI's role in public service that emphasizes the potential for improved efficiency and innovation alongside ethical considerations, transparency, and accountability.

For AI practitioners and developers in the public sector, the memorandum presents an opportunity to lead in the creation of ethically aligned systems. This may lead to an increased demand for expertise in ethical AI, setting new development standards that are likely to become best practices across the industry. Policymakers and regulators are called upon to create dynamic regulations that keep pace with technological advancements, focusing on protecting citizens' rights while facilitating innovation.

Industry stakeholders are likely to experience a push towards enhanced diligence in AI integration, aligning their internal governance models with the memorandum's recommendations to pre-empt regulatory expectations. Clients of law firms, particularly those in sectors in which AI plays a significant role, will need to navigate the new federal guidelines' implications for their operations, seeking guidance on AI system assessments and reviews of legal frameworks.

For the broader legal community, a new proficiency in AI-related jurisprudence is emerging, requiring an understanding of AI intricacies to advocate effectively for clients. Negotiations involving AI technologies, addressing AI-related disputes, and assisting in AI patenting will likely become more prevalent.

Stay ahead of the field
DLA Piper’s team of lawyers and data scientists has extensive experience in navigating AI in the public sector, aided by professionals in the field of policy development in the US and internationally. Our combined knowledge equips us to navigate the intricacies of AI governance, innovation, and risk management, ensuring your AI systems not only comply with current mandates but also anticipate future regulatory developments, including those called out by the memorandum.

DLA Piper

As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper received the Innovative Lawyers in Technology award for its AI and Data Analytics practice.

DLA Piper’s AI policy team in Washington, DC is led by the Founding Director of the Senate Artificial Intelligence Caucus.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI strategy through our newly released AI Chatroom series.

For further information or if you have any questions, please contact any of the authors.

 

[1] The Department of Agriculture, The Department of Commerce, The Department of Defense, The Department of Education, The Department of Energy, The Department of Health and Human Services, The Department of Homeland Security, The Department of Housing and Urban Development, The Department of the Interior, The Department of Justice, The Department of Labor, The Department of State, The Department of Transportation, The Department of the Treasury, The Department of Veterans Affairs, The Environmental Protection Agency, The National Aeronautics and Space Administration, The Agency for International Development, The General Services Administration, The National Science Foundation, The Nuclear Regulatory Commission, The Office of Personnel Management, The Small Business Administration, The Social Security Administration.
