15 December 2025

New Executive Order aims to preempt state AI regulation: Top points

On December 11, 2025, President Donald Trump signed an Executive Order (EO) titled "Ensuring a National Policy Framework for Artificial Intelligence."

The EO is part of a White House effort to streamline artificial intelligence (AI)-related laws and create a national standard. The order notes that a patchwork of state laws could lead to burdensome compliance regimes and harm innovation necessary for global AI leadership. Congressional attempts to preempt or impose a legislative moratorium on state AI laws have faced bipartisan criticism, as well as pushback from some governors.

On November 20, 2025, an early draft of the EO was leaked, and then apparently paused, as Congress again failed to pass a moratorium. Then, on December 8, 2025, President Trump and his Special Advisor for AI and Crypto, David Sacks, issued social media posts indicating that an EO would be signed within days.

In this client alert, we unpack the new EO, outlining which agencies have been directed to take action and the implications for high-stakes disputes over the constitutionality of the directive and federal actions that may be taken pursuant to it.

Policy, purpose, and plan

The EO declares that US policy is “to sustain and enhance America’s global AI dominance through a minimally burdensome, uniform national policy framework for AI.” According to the EO, the Trump Administration has advanced this policy by removing barriers to AI innovation and adopting the technology across the federal government. To further these efforts, the EO directs federal agencies to turn their attention to states with bills and laws that conflict with that policy.

Specifically, the EO focuses on what the Trump Administration identified as a patchwork of "onerous and excessive" state requirements – some of which, according to the order, impinge on interstate commerce or require entities to "embed ideological bias within models." Here, the EO refers explicitly to one state law, the Colorado AI Act, which "may even force AI models to produce false results in order to avoid a 'differential treatment or impact' on protected groups." While the leaked draft also critiqued California's new Transparency in Frontier Artificial Intelligence Act (TFAIA), Colorado is the only state referenced in the final EO.

The order summarizes a two-pronged strategy to deter states from engaging in certain AI-related legislative efforts: (1) suing or withholding funds from states and (2) establishing a light-touch federal framework of AI regulation. Presumably, the latter could help bolster preemption arguments in federal cases against states.

AI Litigation Task Force

The EO directs the Attorney General to create a Task Force to challenge state AI laws, “including on grounds that such laws unconstitutionally regulate interstate commerce [or] are preempted by existing Federal regulations.” The Task Force shall consult with the Special Advisor for AI and Crypto, the Assistant to the President for Science and Technology, the Assistant to the President for Economic Policy, and the Assistant to the President and Counsel to the President.

Evaluation of state AI laws

The EO directs the Secretary of Commerce, in consultation with the same officials, to publish an evaluation of state AI laws that either conflict with stated US policy on AI or that may be unlawful. In particular, the Secretary must identify laws that “require AI models to alter their truthful outputs” or that compel AI developers or deployers to disclose or report information in violation of the First Amendment or other provisions of the Constitution.

State funding restrictions

The Secretary of Commerce is also directed to issue a Policy Notice regarding state eligibility for remaining funds in the Broadband Equity, Access, and Deployment (BEAD) Program. The Policy Notice will specify that any states with burdensome laws identified in the Secretary's published evaluation of state AI laws are ineligible for such funds.

Separately, the EO instructs all federal agencies to assess discretionary grant programs and determine if they can condition grants on states (1) agreeing not to enforce any such identified laws or (2) not passing any new laws deemed contrary to the EO.

Federal reporting and disclosure standards

After the Secretary of Commerce publishes the requisite list of state AI laws, the Federal Communications Commission (FCC) Chairman must, in consultation with the Special Advisor for AI and Crypto, “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempt conflicting State laws.” The EO does not clarify which entities would fall under such a standard or what specific reporting and disclosure is contemplated.

Preemption of laws mandating deceptive AI conduct

The Federal Trade Commission (FTC) Chairman is required, again in consultation with the Special Advisor for AI and Crypto, to “issue a policy statement on the application of the FTC Act’s prohibition on unfair and deceptive acts or practices under 15 U.S.C. 45 to AI models.”

The EO further states that the FTC’s policy statement should discuss state laws that “require alterations to the truthful outputs of AI models,” a possible reference to the Colorado AI Act. The statement must explain the circumstances under which such laws are preempted by the FTC Act’s prohibition on engaging in deceptive commercial practices. That language suggests that the FTC Act could preempt a state law that effectively requires a product to be untruthful to consumers, because that requirement would be inconsistent with the Act’s prohibition on consumer deception.

Legislation

The final substantive provision of the EO directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology, Michael Kratsios, to give the President “a legislative recommendation establishing a uniform Federal regulatory framework for AI that preempts State AI laws that conflict with the policy set forth in this order.”

While it is unclear what the contemplated framework will contain, a few clues arise from statements in the EO that were absent from the leaked draft. First, the EO states that the framework should “ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.” Echoing that language, the EO also notes that the federal legislative recommendation “shall not propose preempting otherwise lawful State AI laws relating to: (i) child safety protections; (ii) AI compute and data center infrastructure, other than generally applicable permitting reforms; (iii) State government procurement and use of AI; and (iv) other topics as shall be determined.”

The multiple references to child protection may be a nod to bipartisan concerns raised about chatbot-related harms. Meanwhile, the broad reference to safeguarding communities may be tied to the more specific direction that states be allowed to assert local control over data centers – another issue that has received bipartisan attention. The call for the framework to respect copyrights is notable, given President Trump’s July 2025 statement that the AI industry should not be required to pay for using content such as articles and books.

Key takeaways

Like Congressional efforts to pass a moratorium on state AI laws, this EO and the actions taken pursuant to it will likely be controversial among lawmakers and industry. Task Force litigation could raise key questions regarding federalism and constitutional principles. Meanwhile, some state officials may consider tabling or modifying pending AI-related bills, or refraining from enforcing existing laws, in light of the EO.

Indeed, such state action may have already occurred. Also on December 11, New York Governor Kathy Hochul took action on the Responsible AI Safety and Education (RAISE) Act, a significant AI developer transparency bill the New York state legislature passed in June 2025. Instead of signing or rejecting it, Governor Hochul returned it to the legislature with proposed changes that would reduce developer obligations and make the bill almost identical to California's TFAIA. This development is notable given that negative references to TFAIA were removed from the final version of the EO. It could also signal a narrowing path for key lawmakers on what constitutes broadly acceptable state-level AI regulation in the US.

Aside from the federal–state challenges, it remains to be seen how the contemplated federal regulatory framework will develop. For more information on the EO’s implications, please contact any of the authors.