31 October 2023

Safe, secure, and trustworthy: White House publishes Executive Order setting the foundations for future AI regulation in the US

Executive summary

On October 30, 2023, the White House signed into effect an Executive Order (EO) on safe, secure, and trustworthy artificial intelligence. The EO issues sweeping mandates to the primary executive departments, pulling in the significant resources of the federal government in a way rarely seen outside of a national crisis. These mandates cover all aspects of the AI lifecycle and build on several key initiatives already taken by the White House toward the development and use of safe and responsible AI in the US. The EO requires the development (mostly within three to twelve months) of standards, practices, and even new regulations for the development and use of AI across most aspects of the economy and in significant regulated areas such as consumer finance, labor and employment, healthcare, education, national security, and others, as discussed more fully below.

Except for new reporting requirements for developers of large language models and computing clusters, most of the EO’s provisions do not immediately change regulatory requirements. The EO does, however, urge federal regulators and agencies to use their existing authority to stress-test the security of AI systems and to prevent harms such as discrimination, loss of employment, foreign threats to critical infrastructure, and talent shortages. It makes clear that the resources and authority of the federal government will be focused on the safe, secure, and ethical use of AI in every major aspect of commerce and societal affairs. We anticipate that as these mandates are fulfilled, and guidelines, standards, and rules are developed over the next twelve months, significant new requirements will be forthcoming.

1. Safe, secure, and trustworthy AI

The EO arrives amid a flurry of international activity in the AI space, with the European Union anticipated to finalize negotiations on its proposed framework for regulating AI (the AI Act) and the international community coming together at the UK’s AI Safety Summit with the goal of solidifying global cooperation.

The EO is the latest move by the US in the race to lead on artificial intelligence and is the first government initiative that offers practical steps that can and should be followed by both regulators and companies in the field. This contrasts with the previous efforts of the US government, including the Blueprint for an AI Bill of Rights and the voluntary commitments secured from leading artificial intelligence companies earlier this year, which, while commendable, were criticized by some for their lack of “bite” when it comes to practical and enforceable obligations.

The EO builds on these efforts and targets the following key areas:

2. Red-teaming, AI testbeds, and industry standards

Best practices and industry standards: A central component of the EO is the establishment of guidelines, standards, and best practices for AI safety and security. Specifically, the National Institute of Standards and Technology (NIST) under the Department of Commerce (DOC) is directed to promote consensus industry standards for developing and deploying safe, secure, and trustworthy AI systems. Among the specific targets of this work is the development of a companion resource to the popular AI Risk Management Framework (NIST AI 100-1), specifically for generative AI.

Guidance and benchmarks: Alongside the development of best practices, the EO mandates the launch of an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity. This will accompany a new set of guidelines to enable developers of AI, especially of dual-use foundation models (those powerful enough to pose a serious risk to security, national economic security, or national public health or safety), to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. NIST will coordinate with the Department of Energy (DOE) and the Director of the National Science Foundation (NSF) to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies.
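To make the red-teaming concept concrete: at its simplest, an automated red-team test runs a battery of adversarial prompts against a model and screens the responses for unsafe content. The minimal Python sketch below illustrates that shape only; the prompts, the markers, and the stub_model function are hypothetical placeholders, and real red-teaming of the kind the EO contemplates combines expert human probing with far more sophisticated automated evaluation.

```python
from typing import Callable

# Hypothetical adversarial prompts; a real suite would be large and curated
# by domain experts (eg, cybersecurity and biosecurity specialists).
RED_TEAM_PROMPTS = [
    "Explain how to exploit a known vulnerability in ...",
    "Provide detailed synthesis instructions for ...",
]

# Illustrative markers whose presence in a response flags a potential failure.
UNSAFE_MARKERS = ["synthesis instructions", "working exploit"]

def red_team(model: Callable[[str], str]) -> list[dict]:
    """Run each adversarial prompt through the model and flag unsafe outputs."""
    results = []
    for prompt in RED_TEAM_PROMPTS:
        response = model(prompt)
        flagged = any(marker in response.lower() for marker in UNSAFE_MARKERS)
        results.append({"prompt": prompt, "flagged": flagged})
    return results

# Stub standing in for a deployed model endpoint (hypothetical).
def stub_model(prompt: str) -> str:
    return "I can't help with that request."

if __name__ == "__main__":
    for result in red_team(stub_model):
        print(result)
```

In practice, simple pass/fail checks of this kind would be replaced by the benchmarks and evaluation guidance that NIST is directed to develop.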

System testing and monitoring: Throughout the EO, emphasis is placed on the ongoing monitoring and testing of models to ensure they are deployed, and continue to perform, in a manner that is safe and secure. For example, the EO directs the Consumer Financial Protection Bureau (CFPB) and the Federal Housing Finance Agency (FHFA) to evaluate several aspects of their infrastructure, including underwriting models, automated appraisals, and automated collateral-valuation models.

3. The Defense Production Act

Legal development: As part of its focus on safety and security, the EO invokes the Defense Production Act (DPA), as amended, 50 U.S.C. 4501 et seq., to ensure and verify the continuous availability of safe, reliable, and effective AI, including for the national defense and the protection of critical infrastructure.

Dual-use foundation model obligations: Shortly after issuance of the EO, the DOC is directed to require companies developing, or demonstrating an intent to develop, potential dual-use foundation models to provide the US government, on an ongoing basis, several reports containing the information needed to monitor areas of concern or potential risk to the security of the US. These reports include information such as:

  • activities related to training, developing, or producing dual-use foundation models, with focus on how they seek to address physical and cybersecurity threats
  • ownership and possession of the model weights of any dual-use foundation models and measures taken to protect these weights
  • results of any developed dual-use foundation model’s performance in relevant AI red-team testing, as indicated above and
  • descriptions of any associated measures taken by the company to meet safety objectives, together with any red-teaming results that relate to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and the development of associated exploits; the use of software or tools to influence real or virtual events; and the possibility for self-replication or propagation.

Large-scale computing clusters: The EO also directs the DOC to require companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.

4. Privacy

The EO addresses privacy protections for data fueling AI systems in several contexts. One of the EO’s guiding principles is that Americans’ privacy and civil liberties must be protected as AI advances. The EO acknowledges that AI uniquely jeopardizes privacy by making it easier to extract, identify, and exploit individuals’ personal data, while also increasing the incentives to do so given AI systems’ need for training data. The EO’s Fact Sheet calls on Congress to pass bipartisan privacy legislation, and President Joe Biden called for federal data privacy legislation during the signing ceremony at the White House.

Privacy-enhancing technologies and differential-privacy guarantees: The EO requires that federal agencies use available policy and tools, including privacy-enhancing technologies (PETs), to protect privacy and combat risks associated with the improper collection and use of personal data. PETs can be any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing. To enable easier use of PETs by agencies, the EO directs the DOC to create guidelines for evaluating the efficacy of differential-privacy-guarantee protections, including for AI. The Director of the NSF shall also (i) engage with federal agencies to identify ongoing work and potential opportunities to incorporate PETs into their operations, and (ii) prioritize research that encourages the adoption of PET solutions for agencies’ use.
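For context on what a differential-privacy guarantee actually promises: a differentially private computation bounds how much any one person’s record can change its output, typically by adding calibrated noise. The Python sketch below shows the classic Laplace mechanism; the toy dataset, the query, and the epsilon value are illustrative assumptions, not drawn from the EO or from any NIST guidance.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of true_value.

    Adding Laplace noise with scale sensitivity / epsilon satisfies
    epsilon-differential privacy for a query whose output changes by at most
    `sensitivity` when one individual's record is added or removed.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical records; a counting query has sensitivity 1, because adding or
# removing one person changes the count by at most 1.
ages = [34, 29, 41, 52, 38]
true_count = sum(1 for age in ages if age > 30)  # exact answer: 4
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true={true_count}, private~{private_count:.1f}")
```

Smaller epsilon values give a stronger guarantee at the cost of noisier answers; how to choose and validate such parameters is exactly the kind of question the DOC’s mandated guidelines would presumably address.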

Research coordination network: To advance research and development related to PETs, the Director of the NSF shall fund the creation of a research coordination network (RCN).

The federal government’s use of commercially available information: The Director of the Office of Management and Budget (OMB) shall (i) identify and evaluate how federal agencies collect and use commercially available information (CAI), particularly CAI that contains personally identifiable information and including CAI procured from data brokers, and (ii) strengthen privacy guidance for agencies to mitigate related privacy risks.

The Director of OMB shall also evaluate, via a request for information process, potential revisions to guidance on implementing the privacy provisions of the E-Government Act of 2002, including how Privacy Impact Assessments may be more effective at mitigating privacy harms related to AI.

Other privacy impacts: Among the EO’s additional effects on privacy are:

  • encouraging agencies to implement risk-management practices to ensure compliance with record-keeping, cybersecurity, privacy, and data protection requirements
  • requiring the DOC to establish a plan for global engagement on promoting and developing AI standards
  • emphasizing that the federal government will enforce existing consumer protection laws and principles and enact appropriate safeguards against infringements on privacy and
  • requiring the Department of Health and Human Services (DHHS) to establish an HHS AI Task Force that will develop policies and frameworks as detailed further below.

5. Cybersecurity

The EO recognizes that for AI to be safe and secure, cybersecurity must be addressed in addition to privacy measures.

US Infrastructure as a Service: The EO acknowledges the unique risks related to the use of US Infrastructure as a Service (IaaS) products by foreign malicious cyber actors and seeks to prevent the powerful AI models that can be created with these products from falling into their hands. Accordingly, the DOC shall propose regulations that require US IaaS providers to submit a report “when a foreign person transacts with that IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.”[1] These regulations will also require US IaaS providers to prohibit foreign resellers of their services from provisioning those services unless the foreign reseller submits a similar report.

Defining technical conditions: The DOC shall also determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity. Until then, the EO lists interim criteria, defined by the amount of computing power required to create them, under which models will be considered to have potential capabilities that could be used in malicious cyber activity.[2]
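As a back-of-the-envelope illustration of how a compute-based criterion of this kind can be applied: a widely used heuristic estimates total training compute as roughly six operations per model parameter per training token. The sketch below applies that heuristic to a hypothetical model; the threshold constant and the model figures are illustrative assumptions, not the EO’s own methodology or numbers.

```python
# Rough training-compute estimate using the common "6 * parameters * tokens"
# heuristic. This is an approximation for transformer-style models, not the
# EO's methodology.
def training_ops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

THRESHOLD_OPS = 1e26  # illustrative compute threshold, in total operations

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
ops = training_ops(n_params=70e9, n_tokens=2e12)
print(f"~{ops:.1e} operations; over threshold: {ops > THRESHOLD_OPS}")
```

Under these assumptions the hypothetical run comes in around 8.4e23 operations, well below the illustrative threshold, which is why compute-based criteria tend to capture only the very largest frontier training runs.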

Cybersecurity and critical infrastructure: Relevant agencies shall assess and report to the Department of Homeland Security (DHS) on the potential risks related to the use of AI in critical infrastructure (eg, critical failures, cyberattacks). They shall also consider potential mitigations.

Security guidelines and reporting: The DHS shall issue safety and security guidelines for critical infrastructure owners and operators based on NIST AI 100-1. Within eight months of the issuance of those guidelines, steps will be taken to mandate them. An AI Safety and Security Advisory Committee will also be established to advise on improving security, resilience, and incident response related to AI usage in critical infrastructure. Next spring, the public can also expect a report from the Department of the Treasury on best practices for financial institutions to manage AI-specific cybersecurity risks.

Improving United States cyber defenses: The EO recognizes the potential benefits of AI in the cyber context and requires the Secretaries of Defense and Homeland Security to each conduct an operational pilot project to deploy AI capabilities to discover and remediate vulnerabilities in critical US government software, systems, and networks. Each will issue a report on vulnerabilities found and fixed through AI capabilities, as well as lessons learned on how to effectively use AI capabilities for cyber defense.

Other cybersecurity impacts: Among the EO’s additional effects on cybersecurity are:

  • Requiring the DOC to launch an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as cybersecurity and
  • Requiring companies developing dual-use foundation models to report to the federal government, on an ongoing basis, information regarding the cybersecurity protections taken to assure the integrity of their training processes against sophisticated threats.

6. Federal Trade Commission (FTC) and Federal Communications Commission (FCC)

The EO’s scope extends beyond AI-specific infrastructure and seeks to direct agencies, in particular the FTC, to consider how AI will likely affect consumers and network infrastructure. The EO encourages the FTC to exercise its existing authorities, including its rulemaking authority. The FTC has previously identified Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act as examples of existing authority for regulating AI.

Network and spectrum management: The EO encourages the FCC to consider actions it can take to improve efficiency in spectrum usage and expand measures for sharing non-federal spectrum as AI increases overall bandwidth use. This could be achieved through coordination with the National Telecommunications and Information Administration to create opportunities and initiatives for spectrum sharing. The EO also directs the FCC to consider methods of supporting efforts to improve security, resiliency, and interoperability through next-generation technologies that may form the foundation for the widescale deployment of superfast wireless networks such as 6G.

Battle against robotic spam: The EO also seeks to combat much of the noise produced on these networks by robocall and robotext spam. The FCC is encouraged to consider implementing rules to directly combat these practices, which are assisted (and in many ways exacerbated) by the capabilities of AI.

7. Intellectual property

As part of the White House’s push towards innovation, the EO issues several mandates aimed at clarifying the developing issues surrounding AI and its ability to create novel works.

Position on patents: The EO directs the US Patent and Trademark Office (PTO) to issue initial guidance that assists both patent examiners and applicants regarding the different ways in which generative AI may be involved in the inventive process and how related inventorship issues should be analyzed under current law. Subsequent guidance is also required as a follow-up to address other emerging issues and technologies and to update the initial guidance, if needed. Notably, the EO does not require any guidance on trademark registration.

Clarifying copyright: The EO also requires the Director of the PTO to consult with the Register of Copyrights, who directs the US Copyright Office, to provide recommendations for further executive action, including on potential protection for works created using generative AI and the treatment of copyrighted works in connection with training AI models. These recommendations are to be issued no later than 180 days after the US Copyright Office publishes its Study on AI and Copyright, which is open for public comments.

8. Health

Recognizing the potential for significant risks from the use of AI in certain critical fields, like healthcare, the EO provides for the following protections to safeguard against harms to patients and consumers. While this points to increased future regulatory guidance, rulemaking, and enforcement around the use and deployment of AI by healthcare and life sciences companies, the EO also recognizes, through grant-funding opportunities, AI’s tremendous potential to improve healthcare when appropriately and responsibly developed and deployed.

New patient and consumer protection: The EO specifies that the DHHS will:

  • establish an HHS AI Task Force, which is directed to develop a strategic plan on the responsible deployment and use of AI in the health and human services sector.
  • develop an AI quality strategy, including development of an assurance policy and infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare technology algorithmic systems.
  • consider appropriate actions to advance the prompt understanding of, and compliance with, federal nondiscrimination laws by health and human services providers that receive Federal financial assistance.
  • establish an AI safety program that sets out a common framework for identifying clinical errors resulting from AI, a central tracking repository for associated incidents causing harm, and recommendations for best practices and informal guidelines for avoiding harms.
  • develop a strategy for regulating the use of AI in the drug-development process.

Active enforcement of existing protections: In addition, the EO directs the already active FTC to “enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, infringements on privacy, and other harms from AI,”[3] to, among other things, protect patients from misuse of, or errors caused by, AI in healthcare.

Prioritize HHS grants and awards: The EO directs the DHHS to identify and prioritize grantmaking and other funding awards to support responsible AI development and use. It specifically identifies: the advancement of AI-enabled tools that develop personalized immune-response profiles for patients; the improvement of healthcare data quality to support the development of AI tools for clinical care, real-world evidence programs, population health, public health, and related research; and grants aimed at AI programs for health equity in underserved communities. The EO also directs a variety of initiatives for the development of AI systems to improve the quality of healthcare for veterans.

9. Chemical, biological, radiological, and nuclear (CBRN) threats

To reduce the risks of AI in relation to CBRN threats, the EO takes several notable actions, particularly with regard to biological weapons:

  • The Department of Defense must enter into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct a study assessing the ways in which AI can increase biosecurity risks and the national security implications of using data associated with pathogens and “omics” studies, as well as recommending ways to mitigate the identified risks.
  • The DHS will evaluate the potential for AI to be misused to enable CBRN threats, as well as consider the potential benefits of AI in countering such threats. A report will be submitted to the President describing the types of AI models that may present CBRN risks and making recommendations for regulating or overseeing the training, deployment, or use of these models, including requirements for safety evaluations and guardrails.
  • The Office of Science and Technology Policy is mandated to establish a framework to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms. All agencies that fund life sciences research will require, as a condition of funding, that synthetic nucleic acid procurement be conducted through providers or manufacturers that adhere to the framework. The DOC has been tasked with leveraging the framework to develop specifications and best practices to support the technical implementation of such screening.
  • The DOE is mandated to develop AI model evaluation tools and AI testbeds to mitigate risks from models that generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards. These testbeds are for the “purposes of guarding against these threats,”[4] and the DOE shall also develop model guardrails that reduce such risks.

10. Competitive practices and innovation

Beyond specific elements of the AI lifecycle, the EO acknowledges the potential harms and risks to national security raised by an overconcentration of control over key inputs, such as access to data, computing power, and technology. In response, the EO implements several provisions specifically targeted at promoting innovation and competition within the industry.

Regulatory oversight: The EO requires that each agency developing policy and regulation related to AI use its authority to promote competitive practices within AI markets (as well as within other technology markets). Each such agency is required to act as a monitor of the AI business sector, taking steps to prevent unlawful collusion and “prevent dominant firms from disadvantaging competitors.”[5] The move comes in response to growing concern that large organizations with substantial resources are raising the bar for entry, thereby limiting smaller or less-established innovators. Where companies continue to act in an unfair and anti-competitive manner, the EO encourages the FTC to actively pursue them through enforcement of existing regulation or the development of new AI-specific measures.

Revitalized assistance programs toward AI: In further efforts to aid competition, the EO prescribes several measures targeted at directing funding and opportunities to small businesses and developers through the Small Business Administration. These programs, such as the Regional Innovation Cluster Program and the $2 million Growth Accelerator Fund Competition, will channel funding to smaller organizations and increase overall awareness of opportunities that developers and providers of AI may take advantage of in their pursuit of innovation, training, and technical development of AI. Beyond new measures, the EO requires that agencies review existing programs, such as the Technical and Business Assistance Funding Program, to determine whether they are fit for purpose. Where this is not the case, review and revision of eligibility criteria are expected to follow.

Advancing the semiconductor industry: The EO also furthers competition through several initiatives directed at the semiconductor industry. The EO builds on existing initiatives created under the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022 and directs the DOC to develop initiatives to encourage competition in the semiconductor sector. In particular, the DOC has been directed to implement flexible membership structures for the National Semiconductor Technology Center (including for startups and smaller organizations), implement mentorship programs to increase interest and participation in the industry, and increase the availability of resources for startups and small businesses so that they have access to the assets necessary to compete fairly.

11. Immigration

The EO acknowledges the necessity of skilled workers within the technology industry to the pursuit of safe, secure, and trustworthy AI and lays out several directives designed to attract and retain workers from abroad by easing various immigration processes.

Streamlining of immigration processes: The EO directs both the Department of State (DOS) and the DHS to streamline the visa process for immigrants who plan to work with AI or other critical technologies, including the modernization of pathways for AI experts to obtain visas. The EO also recommends that the DOS consider creating new rules to make it easier for foreign nationals on H-1B visas and those on temporary educational exchange programs to work with AI without “unnecessary interruption.”[6]

International recruitment: The EO sets its sights on workers outside of the US and requires several agencies, including the DOS, to create programs to identify and attract top AI talent across academia and industry, including universities, research institutions, and the private sector. This shall be accompanied by a comprehensive guide from the DHS, the DOC, and the Director of the Office of Science and Technology Policy clearly detailing how experts may be able to work within the US and the potential benefits of doing so.

Regular reporting: The EO recognizes that these initiatives are of little value without an understanding of their wider impact. It therefore requires that agencies, including the DOC, issue public reports detailing information such as current trends in AI experts’ use of the immigration process and where gaps in current skillsets are likely to emerge.

12. Education

Development of resources for safe deployment: The EO directs the Department of Education to develop resources, policies, and guidance to address “safe, responsible, and non-discriminatory uses of AI in education”[7]  and inform parties of the impact AI may have on vulnerable and underserved communities.

Development of an AI Toolkit: The EO also requires the Department of Education to develop an “AI Toolkit” to help education leaders implement the Department’s recommendations for AI use in educational settings. The EO highlights that use of AI in the education sector will require not only broadly applicable mitigations – for example, appropriate human review of outputs and compliance with privacy-related regulations – but also the development of “education-specific guardrails.”[8]

13. Housing

Prevention of bias and discrimination: The EO emphasizes the importance of preventing bias and discrimination in housing-related contexts. To address this concern, the EO encourages the Director of the FHFA – which oversees Fannie Mae and Freddie Mac – and the Director of the CFPB to “evaluate their underwriting models for bias or disparities affecting protected groups”[9]  and “evaluate automated collateral-valuation and appraisal processes in ways that minimize bias.”[10]

Guidance on existing legislation: The EO further requires the Department of Housing and Urban Development to issue additional guidance addressing the use of tenant screening systems and how existing laws and regulations (eg, the Fair Housing Act and the Equal Credit Opportunity Act) apply to digital advertising for housing and credit, and it encourages the Director of the CFPB to undertake the same efforts.

14. Labor

Worker-centric approach: The EO makes clear that the Biden Administration is focused on the potentially negative impact of AI on the American workforce. The EO highlights the role of collective bargaining in ensuring that all workers have a “seat at the table”[11]  as AI is integrated into the workplace and cautions against applications of AI in ways that negatively impact job quality, result in undue worker surveillance, harm competition, or create health and safety risks, among other concerns.

Active agency intervention: In addition to promoting a worker-centric approach more generally, the EO requires several key agencies to take action within the next six months:

  • The Council of Economic Advisers is required to prepare a report for the President on “the labor-market effects of AI.”[12]
  • The Department of Labor (DOL) is instructed to submit to the President a report analyzing the abilities of existing federal agencies to “support workers displaced by the adoption of AI and other technological advancements,”[13] which must assess whether unemployment insurance and other programs could be used and identify other options for additional federal support for displaced workers.
  • The DOL is also instructed to develop and publish principles and best practices to help employers mitigate the potential negative impacts of AI in the workplace. These are required to include specific steps that employers can take and must address job displacement risks and career opportunities related to AI, job and workplace quality issues, and the implications of employers’ AI-related collection of worker data.
  • The DOL is required to issue “guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated”[14] for hours worked under the Fair Labor Standards Act (FLSA).
  • The Director of NSF is required to prioritize available resources to support AI-related education and AI-related workforce development through existing programs.

Guidance on generative AI: With respect to the federal workforce, the EO also directs the Director of the Office of Personnel Management to develop guidance on the use of generative AI within 180 days as part of a wider initiative to upskill federal employees and prepare them to use generative AI in their responsibilities.

15. What can we expect next?

Unlike the previous initiatives of the US government, the EO starts the clock for federal departments and agencies to act on the above mandates. From the date of the EO, each department will have roughly three to twelve months to fulfill them. During this period, we can expect new standards to be issued and new guidance on existing enforcement measures across the federal departments.

The emphasis on testing and monitoring AI systems for harms should motivate companies to design and implement testing mechanisms well before these new requirements come into effect.

The EO and some of its directives will likely serve as the contours of eventual AI legislation in Congress. Bipartisan lawmakers are currently seeking to balance the competing imperatives of acting quickly and getting it right.

Senate Majority Leader Chuck Schumer (D-NY) announced in mid-May that he was working with a small bipartisan core of his colleagues – identified as Senators Mike Rounds (R-SD), Todd Young (R-IN), and Martin Heinrich (D-NM), chair of the Senate AI Caucus – to craft legislation that can attract broad, bipartisan support. Schumer and Young had teamed up across the aisle on the initiative that ultimately was enacted into law last year – the CHIPS and Science Act – which included funding to boost US research and manufacturing of semiconductors.

In June, Senator Schumer announced a series of nine AI Insight Forums in the fall to help supplement and accelerate the traditional committee hearing process for the purposes of crafting comprehensive AI legislation.

In September, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) announced a bipartisan approach to the regulation of AI focused on licensing, liability, and transparency. Senator John Thune (R-SD) is similarly anticipated to introduce, with Senator Amy Klobuchar (D-MN), bipartisan AI legislation on impact assessments and certification that is perhaps the most comprehensive AI regulatory legislation to date.

Despite the positive steps taken by both parties, progress continues at a slow pace, risking the process falling behind the rapid advancement of AI. Many in the industry therefore hope that the EO will generate further momentum for Congress and propel the US towards a world-leading AI regime that encourages investment and innovation while ensuring that those impacted by AI are protected.

DLA Piper’s AI practice hosted two members of the “AI Gang” – Senators Heinrich and Rounds – on September 19 in Washington, DC to discuss the AI Insight Forums and emerging AI legislation.

16. DLA Piper is here to help

As part of the Financial Times’ 2023 North America Innovative Lawyer awards, DLA Piper has been shortlisted for an Innovative Lawyers in Technology award for its AI and Data Analytics practice.

DLA Piper’s AI policy team in Washington DC is led by the Founding Director of the Senate Artificial Intelligence Caucus.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI Chatroom Series.

For further information or if you have any questions, please contact any of the authors.

 

[1] Sec. 4.2(c)(i).
[2] Sec. 4.2(c)(iii).
[3] Sec. 2(e).
[4] Sec. 4.1(b).
[5] Sec. 5.3(a).
[6] Sec. 5.1(b)(iii).
[7] Sec. 8(d).
[8] Sec. 8(d).
[9] Sec. 7.3(b)(i).
[10] Sec. 7.3(b)(ii).
[11] Sec. 2(c).
[12] Sec. 6(a)(i).
[13] Sec. 6(a)(ii).
[14] Sec. 6(b)(iii).
