29 February 2024 · 11 minute read

DOJ announces proposed rule to mitigate data security risks related to AI

On February 28, 2024, President Joe Biden issued an Executive Order titled “Preventing Access to Americans’ Bulk Sensitive Personal Data and US Government-Related Data by Countries of Concern” (EO). This order seeks to restrict the sale of sensitive American data to China, Russia, Iran, North Korea, Venezuela and Cuba (Countries of Concern) to prevent those countries from accessing personally identifiable information for purposes of blackmail, surveillance, and the misuse of artificial intelligence (AI) to target Americans.

On the same day, the US Department of Justice (DOJ) announced that it will be implementing a “Groundbreaking Executive Order Addressing National Security Risks and Data Security” in response to, and as directed by, the EO. Shortly after, DOJ’s National Security Division (NSD) issued an Advance Notice of Proposed Rulemaking (ANPRM) outlining how that implementation may take shape.

DOJ’s announcement is consistent with other steps it has taken to broadcast its focus on the safety of American citizens’ sensitive personal data, which faces “unusual and extraordinary” threats from malign states, particularly given the power of AI.

The EO is a significant enforcement tool

Overview of the EO

The EO is a significant enforcement tool that will allow DOJ to combat crimes perpetrated or enhanced by AI, particularly those committed by foreign adversaries. Specifically, it stresses that the continuing efforts of Countries of Concern to access Americans’ sensitive personal data and US-government-related data constitute an “unusual and extraordinary threat,” as such access increases the ability of Countries of Concern to engage in a wide range of malicious activities facilitated by the use of AI.

Notably, the EO refers to the ability of the Countries of Concern to rely on advanced technologies like AI to both:

  1. Analyze and manipulate bulk sensitive personal data to engage in espionage, influence, kinetic, or cyber operations, or to identify other potential strategic advantages over the US, and
  2. Fuel the creation and refinement of AI and other advanced technologies, thereby improving their ability to exploit the underlying data and exacerbating the national security and foreign policy threats.

As Assistant Attorney General Matthew Olsen commented, this new enforcement tool will allow NSD to prosecute “hostile foreign powers [that] are weaponizing bulk data and the use of artificial intelligence to target Americans.”

NSD released a draft version of the ANPRM on February 29, 2024, which describes the initial categories of transactions involving bulk sensitive personal data or certain US-government-related data, and seeks public comment on, among other things, the activity DOJ contemplates regulating, data brokerage, transfers of genomic data, and vendor, employment, and investment agreements. NSD expects to adopt the EO’s definition of Countries of Concern, and China, Russia, Iran, North Korea, Cuba, and Venezuela are likely to be on the list.

Scope of the EO

The EO has two components, and instructs DOJ, in consultation with other agencies, to issue regulations that identify:

  • Specific classes of transactions – for prohibition – that may enable access to defined categories of Americans’ bulk sensitive personal data or government-related data by Countries of Concern or covered persons that pose an unacceptable risk to US national security and foreign policy (Prohibited Transactions). These Prohibited Transactions would be those that are highly sensitive and would be prohibited in their entirety.[1]
  • Specific classes of transactions that will be required to comply with security requirements that mitigate the risks of access to Americans’ bulk sensitive personal data or government-related data by Countries of Concern (Restricted Transactions). These Restricted Transactions are those that would be prohibited, except to the extent they comply with predefined security requirements.[2] These security requirements could include: (i) organizational requirements, such as basic organizational cybersecurity posture; (ii) transaction requirements, such as minimization and masking, use of privacy preserving technologies, requirements for information technology systems to prevent unauthorized disclosure, and logical and physical access controls; and (iii) compliance requirements, such as audits.[3]
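The transaction-level requirements previewed for Restricted Transactions – data minimization and masking – can be illustrated with a short sketch. This is a hypothetical example, not guidance from the ANPRM or the EO; the field names, the allowed-field policy, and the salted-hash masking scheme are all assumptions chosen for illustration.

```python
import hashlib

# Hypothetical illustration of two transaction-level controls the ANPRM
# previews: minimization (drop fields not needed for the transaction) and
# masking (replace direct identifiers with irreversible tokens).

ALLOWED_FIELDS = {"user_id", "zip3", "age_band"}  # assumed minimization policy


def mask_identifier(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted, irreversible hash token."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]


def minimize_and_mask(record: dict, salt: str) -> dict:
    """Keep only allowed fields, then mask the remaining direct identifier."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "user_id" in minimized:
        minimized["user_id"] = mask_identifier(minimized["user_id"], salt)
    return minimized


record = {
    "user_id": "jane.doe@example.com",  # direct identifier: masked
    "ssn": "123-45-6789",               # not needed for the transaction: dropped
    "zip3": "941",                      # truncated geography: retained
    "age_band": "30-39",                # banded attribute: retained
}
print(minimize_and_mask(record, salt="per-dataset-secret"))
```

A real compliance program would pair controls like these with the organizational and audit requirements the ANPRM also contemplates; this sketch only shows the data-handling step.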

The EO covers bulk US sensitive personal data, as well as US-government-related data regardless of volume. A draft version of the ANPRM suggests that DOJ is considering including six defined categories of bulk US sensitive personal data:

  1. US persons’ covered personal identifiers
  2. Personal financial data
  3. Personal health data
  4. Precise geolocation data
  5. Biometric identifiers, and
  6. Human genomic data[4]

The draft ANPRM also suggests two kinds of US-government-related data, regardless of their volume: (i) geolocation data in listed geofence areas associated with certain military, other government, and other sensitive facilities; and (ii) sensitive personal data that is marketed as linked or linkable to current or recent former employees, contractors, or former senior officials, of the US government, including the military and intelligence community.

The EO seeks to balance privacy and national security while protecting democratic values and promoting an “open, global, interoperable, reliable, and secure Internet; protecting human rights online and offline; supporting a vibrant, global economy by promoting cross-border data flows required to enable international commerce and trade; and facilitating open investment.”

Accordingly, the EO cautions DOJ to refrain from establishing General Data Protection Regulation (GDPR)-like generalized data localization requirements to store bulk sensitive personal data or US-government-related data within the US, or to require that computing facilities used to process such data be located within the US, asserting that the national security restrictions are “specific, carefully calibrated actions.” To that end, the EO expressly excludes transactions that are ordinarily incident to and part of the provision of financial services, including banking, capital markets, and financial insurance services, or that are required for compliance with any Federal statutory or regulatory requirements.

DOJ is also considering exempting intra-entity transactions that are incident to business operations – that is, data transactions that are (i) between a US person and its subsidiary or affiliate located in (or otherwise subject to the ownership, direction, jurisdiction, or control of) a Country of Concern, and (ii) ordinarily incident to and part of ancillary business operations. These may include the sharing of employees’ covered personal identifiers for human-resources purposes, payroll transactions, and the sharing of data with auditors and law firms for regulatory compliance.[5]

Under the EO, DOJ is also expected to establish, in consultation with other agencies, mechanisms to provide additional clarity to persons affected by the EO and any regulations and a process to issue, modify, or rescind licenses authorizing transactions that would otherwise be Prohibited Transactions or Restricted Transactions. DOJ’s proposed rules will also address the need for recordkeeping and reporting of transactions to inform investigative, enforcement, and regulatory efforts, and will establish thresholds and due diligence requirements for entities to use in assessing whether a transaction is a Prohibited Transaction or a Restricted Transaction.

DOJ’s deployment of “Justice AI”

On February 22, 2024, the Attorney General announced the designation of Jonathan Mayer, a computer science professor at Princeton University, as DOJ’s first Chief Science and Technology Advisor and Chief AI Officer. Mayer will serve in DOJ’s Office of Legal Policy and provide technical insight to the Attorney General and DOJ’s leadership on cybersecurity, AI, and other areas of emerging technology. He will also oversee DOJ’s efforts to build a team of technical and policy experts in technology-related areas, including cybersecurity and AI.

As the Chief AI Officer, Mayer will lead DOJ’s Emerging Technology Board. During remarks at Oxford University on February 14, 2024, Deputy Attorney General Lisa Monaco explained that the Emerging Technology Board will advise her and the Attorney General on the responsible and ethical uses of AI by DOJ and will spearhead “Justice AI,” a new initiative that will unfold over the next six months.

According to Monaco, under this new initiative, DOJ will meet and seek to engage with a diverse pool of individuals across civil society, academia, science and industry to understand and prepare for “how AI will affect the Department’s missions and how to ensure [DOJ] accelerate[s] AI’s potential for good while guarding against its risks.”

AI as DOJ’s top enforcement priority

Monaco touched on the challenge of AI, which is that while DOJ has leveraged AI to combat crime, AI has also accelerated security risks and provided a powerful tool to criminals. DOJ appears to be taking steps to ensure that these negative implications do not surpass the benefits.

Monaco stressed that AI has the potential to “be an indispensable tool to help identify, disrupt, and deter criminals, terrorists, and hostile nation-states from doing us harm.” She also cited a few examples where DOJ deployed AI to strengthen its work, which included (i) classifying and tracing the source of opioids and other drugs; (ii) triaging and understanding tips submitted to the Federal Bureau of Investigation (FBI); and (iii) synthesizing large volumes of evidence collected in DOJ’s most significant cases, including the January 6, 2021, Capitol cases.

Monaco also stressed the risks and threats that AI presents, as it can enhance the danger of a crime. This is particularly relevant with respect to the upcoming elections in the US and abroad, discrimination, price fixing, and identity theft – areas where the US criminal justice system has long applied increased penalties. As a result, DOJ will likely deploy creative theories to impose stricter sentences for those offenses made significantly more dangerous by the misuse of AI, in advance of any possible reforms to the existing US Sentencing Guidelines.

Key takeaways

These recent pronouncements signify that DOJ is very focused on AI, and that it has become a key enforcement priority. There is uncertainty about the timing and details of DOJ’s new AI-focused measures and the proposal of new regulations to govern AI. Nevertheless, companies may consider taking some interim steps to prepare for the regulatory and enforcement changes on the horizon:

  1. Assessing the flow of data transfers and conducting preliminary risk assessments to determine whether those transfers would be considered Prohibited Transactions or Restricted Transactions.
  2. Reviewing existing security measures to ensure that they address national security and foreign policy threats, and that they implement the three categories of security requirements previewed in the ANPRM.
  3. Reviewing agreements that relate to transactions falling within the scope of the Restricted Transactions to ensure that they address those security measures.
  4. Reviewing agreements involving data transfers that fall outside the scope of the Prohibited and Restricted Transactions and ensuring that they restrict the counterparty’s ability to engage in Prohibited and Restricted Transactions.
  5. Reviewing the existing compliance program to ensure that it addresses and protects against national security threats, and considering preparing new operating policies and procedures to ensure compliance with the upcoming regulations regarding Prohibited and Restricted Transactions.
  6. Considering designing and implementing a company-wide AI policy to govern use. As the US government increasingly regulates AI and DOJ increases enforcement, it will be important for companies to have clear policies and procedures in place that help govern the use of AI in the workplace.
  7. Conducting a risk assessment to determine vulnerabilities and areas of AI-related risk.
  8. Monitoring AI-related enforcement actions. DOJ and regulatory agencies publish enforcement actions as a way to deter certain activity and put the market on notice. They are also a good indicator of enforcement priorities, highlighting where companies, for example, have faltered.
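The preliminary risk assessment in step 1 can be sketched as a simple rules-based triage against the transaction classes floated in the draft ANPRM. The country list and the class names below come from the summary above, but the labels, thresholds, and logic are placeholder assumptions; any real classification will turn on DOJ's final regulations and fact-specific analysis.

```python
from dataclasses import dataclass

# Hypothetical triage of a data transfer against the transaction classes
# described in the draft ANPRM. Placeholder logic for illustration only.

COUNTRIES_OF_CONCERN = {"China", "Russia", "Iran", "North Korea", "Cuba", "Venezuela"}

# Draft ANPRM classes, per footnotes [1] and [2] above (names are assumed labels)
PROHIBITED_CLASSES = {"data_brokerage", "bulk_human_genomic_data"}
RESTRICTED_CLASSES = {"vendor_agreement", "employment_agreement", "investment_agreement"}


@dataclass
class Transfer:
    counterparty_country: str
    transaction_class: str  # e.g. "data_brokerage", "vendor_agreement"


def triage(transfer: Transfer) -> str:
    """Return a preliminary label: 'prohibited', 'restricted', or 'out_of_scope'."""
    if transfer.counterparty_country not in COUNTRIES_OF_CONCERN:
        return "out_of_scope"
    if transfer.transaction_class in PROHIBITED_CLASSES:
        return "prohibited"
    if transfer.transaction_class in RESTRICTED_CLASSES:
        return "restricted"
    return "out_of_scope"


print(triage(Transfer("China", "data_brokerage")))     # prohibited
print(triage(Transfer("Russia", "vendor_agreement")))  # restricted
print(triage(Transfer("France", "data_brokerage")))    # out_of_scope
```

A triage like this only flags transfers for legal review; it is not a substitute for the due diligence and recordkeeping requirements DOJ's proposed rules are expected to establish.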

DLA Piper’s AI and White-Collar Defense and Global Investigations teams work collaboratively to advise businesses on cutting edge AI and enforcement considerations and will continue to monitor these developments closely. Please reach out to any of the authors for more information.


[1] A draft version of the ANPRM suggests that DOJ is considering identifying two classes of Prohibited Transactions: (i) data brokerage transactions; and (ii) any transaction that provides a Country of Concern or covered person with access to bulk human genomic data or human biospecimens from which that human genomic data can be derived. The draft version of the ANPRM is available at: unofficial_signed_anprm.pdf (justice.gov).
[2] The draft version of the ANPRM also identifies three classes of Restricted Transactions that DOJ is considering, to the extent they involve Countries of Concern or covered persons and bulk US sensitive personal data: (i) vendor agreements, including agreements for technology services and cloud-services agreements; (ii) employment agreements; and (iii) investment agreements. See id.
[3] See id.
[4] See id.
[5] See id.
