
2 November 2023 | 8 minute read

Secure, safe, and trustworthy: Common ground between the US AI Executive Order and the EU AI Act

With the publication of the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the EO) on October 30, we can start to see in writing the areas of common ground and difference between the US approach to AI regulation and that set out in the most recent drafts of the EU’s AI Act.

A key distinction between the EO and the current drafts of the AI Act lies in their reach. Rather than waiting for legislation that would regulate private industry directly, the EO draws on the power of the Presidency to require the principal executive departments, across sectors, to formulate consensus industry standards, guidelines, practices, and regulations for AI development and use, which are expected to materialize over the coming months. This may create an inherent risk of divergent standards across sectors, but a sectoral approach to regulation in the US has long been anticipated.

In contrast, the AI Act aims to establish a regulatory framework for artificial intelligence across the entire European Union as a single horizontal regulation with direct impact on the private sector. As an EU Regulation, it will be directly applicable in all EU Member States, without the need for local implementation, save for aspects specifically addressed within the AI Act. While effective in harmonizing the regulation of AI across the Member States, this approach has been criticized for its lack of flexibility and for the potential for regulatory gaps to emerge as technology advances.

Moreover, the EO predominantly focuses on standards and guidelines, while the AI Act enforces binding regulations, violations of which will incur fines and other penalties without further legislative action. While the measures in the AI Act appear more stringent on first reading, once additional legislative or regulatory action in the US under the EO is taken into account, the practical effect of the two regimes could end up being similar.

The AI Act will take a use-case based approach to regulating AI. From the outset, drafts have defined a small number of prohibited practices (such as using subliminal techniques to cause harm) and a longer list of "high-risk" use cases. The list of high-risk uses includes use as part of a safety system for particular products (for example, in medical devices or power tools), use in the context of recruitment or employment, use in education, and use in credit scoring for lending, as well as a short list of other cross-cutting uses.

For prohibited AI use cases, the Act is simple: any such use will be met with a fine. Use of AI in designated “high-risk” contexts requires a broad set of requirements to be met, including (i) having appropriate (documented) risk management frameworks in place; (ii) ensuring that AI is created using high-quality data sets and appropriate data governance regimes; (iii) keeping appropriate technical documentation; (iv) adhering to strict record-keeping standards; (v) ensuring transparency by design; (vi) ensuring AI is subject to human oversight; and (vii) ensuring the accuracy, robustness and security of AI systems. Deployment of high-risk AI, and (based on recent revisions) foundation models, will require conformity assessments, the use of quality assurance systems and ongoing monitoring.

The AI Act will also set out controls on the deployment of "foundation models," such as large language models or image-generation AIs, which can be used in many different AI systems or workflows. Recently, there has been a move to more closely regulate so-called "very capable foundation models," being the most complex and capable systems.

Similarly, the EO pays special attention to so-called "dual-use foundation models" that "exhibit, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters," including cyber, chemical/biological/radiological/nuclear (CBRN) weapons, and deception/manipulation risks. Notably, foundation models are still "dual use" even if they have controls in place. Among the more significant actions in the EO are new requirements for "red-teaming" by AI foundation model developers ("especially" but not limited to dual-use foundation model developers), including, in some cases, requirements to submit to the government both the findings and the subsequent mitigation efforts taken by the developer. New "rigorous standards" for these red-team tests will be established by the US government to ensure safety before public release of systems that pose a serious risk to national security, national economic security, or national public health and safety.

Both the AI Act and the EO underscore system testing and monitoring across an AI system’s life cycle. The AI Act requires businesses to substantiate their compliance, incorporating comprehensive pre-market testing procedures (detailing the methods and criteria used for testing before launch) and a post-market monitoring policy (focusing on the developer’s monitoring of the system’s continued performance).

Likewise, the EO contemplates that "[t]esting and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies." While the EO specifically calls out "infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare-technology algorithmic system performance against real-world data," this singling out of healthcare may be an artifact of the sector-specific drafting and may portend similar requirements for other fields. Given that many companies will find themselves subject to both the AI Act and the EO, harmonizing these standards will be part of the task ahead.

Another emphasis shared by the AI Act and the EO is the protection of individual privacy, recognizing the potential intrusiveness of AI systems. However, while the AI Act leverages the pre-existing GDPR framework, the EO, in the absence of nationwide privacy legislation in the US, calls for a relevant regime to be formulated. Notably, neither document allows exceptions to privacy legislation for AI system training, which could lead to major disputes and potential class actions in the future.

Both documents also mandate adherence to cybersecurity standards, promoting principles such as security by design and a consistent emphasis on performance. A unique focus of the EO is its proactive stance against the exploitation of large AI models by malicious international cyber actors, a concern less accentuated in the AI Act, although other EU laws currently in force or under development (the NIS2 Directive and the Cyber Resilience Act, for instance) may create similar obligations outside the AI Act.

Other unique elements of the EO include directing select government agencies to establish AI "testbeds" for the purposes of conducting rigorous, transparent, and replicable testing of tools and technologies in secure and isolated environments to help evaluate the functionality, usability, and performance of AI systems. The EO also leverages federal procurement and grantmaking to influence industry standards and innovation by prioritizing AI in regional innovation programs and imposing certain standards on AI developers as a condition of award eligibility. While such government investment and procurement-based initiatives are absent from the AI Act, they are covered in other EU law.

In terms of intellectual property compliance, debate is ongoing over the AI Act's requirement for full disclosure of protected materials used in AI system training – an area identified as potentially dispute-prone. The EO, for its part, advocates clarifying the boundaries of patent and copyright law concerning AI-supported creations, a perspective not explicitly tackled in the AI Act.

The EO also uniquely touches on broader political dimensions such as immigration, education, housing, and labor, areas not explicitly covered by the more compliance-focused AI Act. It also tackles certain specific risks – such as those associated with using AI to design biological materials – that are not directly addressed in the AI Act.

In sum, businesses operating globally may see substantial overlap in the compliance efforts needed to meet the requirements of both the AI Act and the EO. However, at least at this early stage, the EU is leaning towards a more formal demonstration of compliance through supporting documentation, while under the US approach, carrying out the necessary activities in alignment with industry standards might in some cases suffice. Understanding the common ground that underpins these two superficially disparate regimes is key to developing a future compliance strategy that allows global organizations to align with both. It is perhaps no coincidence that the EO was issued on the same day that the G7 approved its AI Code of Conduct, which spells out principles present in both the AI Act and the EO. We will continue to monitor and advise as the world works towards harmonization across standards.

DLA Piper is here to help

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

For more analysis of the Order in particular, see our team’s separate comprehensive evaluation.

As part of the Financial Times’ 2023 North America Innovative Lawyer awards, DLA Piper has been shortlisted for an Innovative Lawyers in Technology award for its AI and Data Analytics practice.

DLA Piper’s AI policy team in Washington DC is led by the founding Director of the Senate Artificial Intelligence Caucus.

Gain insights and perspectives that will help shape your AI strategy through our newly released AI Chatroom Series.

For further information or if you have any questions, please contact any of the authors.
