23 March 2026

White House releases the National Policy Framework for Artificial Intelligence: Key points

On March 20, 2026, the White House released a document titled “A National Policy Framework for Artificial Intelligence,” setting out a framework of legislative recommendations related to artificial intelligence (AI).

In an accompanying announcement, the White House noted that it looks forward to working with Congress to advance corresponding legislation in the next few months. 

The legislative framework covers a broad array of topics, such as:

  • Preventing AI-related harm to children

  • Streamlining data center construction while protecting consumers from higher energy rates

  • Protecting consumers from AI-enabled scams

  • Mitigating national security concerns arising from frontier AI models

  • Protecting copyright holders while balancing AI developer needs

  • Preventing censorship and free-speech violations

  • Building accessible AI testing environments

  • Assisting American workers with AI-related training

The White House announcement also states that the new federal laws should preempt any conflicting state laws. 

The framework’s release follows the similarly titled Executive Order (EO) 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” which was issued on December 11, 2025, and discussed in a prior DLA Piper alert. In that EO, the White House instructed federal officials to develop “a legislative recommendation establishing a uniform Federal regulatory framework for AI that preempts State AI laws that conflict with the policy set forth in this order.” The EO referred to several topics that the framework should cover, all of which are addressed in the framework released on March 20, 2026.

In this client alert, we unpack the framework’s contents and discuss the landscape for advancing the legislative recommendations.

The framework’s sections

Protecting children and empowering parents

Recognizing public concerns with children’s use of AI tools, the framework calls for Congress to ensure that AI companies take measures that empower parents to exercise control over what their children can access. The White House seeks a legislative mandate for companies to 1) provide parents with “robust tools” to manage privacy settings, screen time, and content exposure; 2) establish “age-assurance requirements”; and 3) implement features to reduce risks of sexual exploitation and self-harm. 

Further, the White House asks Congress to affirm that relevant child privacy protections apply to AI systems, avoid standards with litigation-inducing ambiguities, and avoid preempting generally applicable state laws that protect children.

Safeguarding and strengthening American communities

This section first seeks to balance corporate interests in the domestic building of AI data centers with publicly expressed concerns about energy costs being passed on to consumers. Specifically, while it calls for Congress to streamline federal permitting for AI infrastructure construction, it also seeks to effectively codify the voluntary Ratepayer Protection Pledge through a legal requirement “that residential ratepayers do not experience increased electricity costs as a result of new AI data center construction and operation.”

Separately, the White House asks Congress to “augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors.” This direction dovetails with a March 6, 2026 EO that focuses on consumer fraud and mentions impersonation scams and at-risk citizens.

The framework briefly addresses national security considerations arising from frontier AI models by calling on Congress to ensure that relevant agencies have enough technical capacity to mitigate such concerns, “including through consultation with frontier AI model developers.”

To address the ability of small businesses to deploy AI tools, the White House says that Congress should provide them resources “such as grants, tax incentives, and technical assistance programs.”

Respecting intellectual property rights and supporting creators

In recent years, courts have seen a growing number of cases involving the use of copyrighted materials in training AI models. Although the White House says in the framework that it believes such training to be legal, it acknowledges the ongoing legal debate and advises Congress to leave decisions to the courts. At the same time, the framework acknowledges the interests of creators and publishers, suggesting that Congress enable, but not require, “licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers, without incurring antitrust liability.” 

Congress is also instructed to consider how to protect individuals “from the unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes,” without stifling free speech online. States have already been active in this area, with Washington Governor Bob Ferguson having just signed a digital replica law on March 16, 2026.

Preventing censorship and protecting free speech

Echoing EO 14319 of July 23, 2025, “Preventing Woke AI in the Federal Government,” the framework instructs Congress to prevent the US government itself “from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.”

The framework also calls on Congress to create a means “for Americans to seek redress from the Federal Government for agency efforts to censor expression on AI platforms or dictate the information provided by an AI platform.” The Trump Administration has previously expressed similar concerns regarding federal government censorship.

Enabling innovation and ensuring American AI dominance

In an effort to legislatively implement two elements of the White House’s July 2025 AI Action Plan, the framework asks Congress to 1) establish regulatory sandboxes for AI applications and 2) provide resources to make federal datasets accessible to industry and academia for use in building AI models and systems. These elements, the White House says, will help remove barriers to innovation and accelerate AI deployment across sectors.

Although Congress has not advanced a bill to create a new regulatory agency for AI, the framework nonetheless cautions Congress against doing so. Instead, the White House notes that Congress should “support development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise and through industry-led standards.”

Educating Americans and developing an AI-ready workforce

The framework stresses the importance of American workers benefiting from AI development, pointing to the need for skills training and new jobs. This message connects directly with the Department of Labor’s recent release of its AI Literacy Framework. In this section, rather than seek new laws, the White House asks Congress to support: 

  • Methods to ensure AI training will be incorporated in relevant education and workforce training programs

  • Federal study of “trends in task-level workforce realignment driven by AI” 

  • AI-related capabilities of land-grant institutions

Establishing a federal policy framework, preempting cumbersome state AI laws

The last section addresses the principal concern expressed in the December 2025 EO – namely, a patchwork of state laws the White House identified as a hindrance to innovation. Repeating the themes of that EO, the White House says that Congress should thus “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations.”

The framework says – again repeating themes discussed in the December 2025 EO – that the national standard should be targeted in its preemptive effects. It notes that the standard should respect traditional state police powers, particularly laws of general applicability that “protect children, prevent fraud, and protect consumers.” Per the framework, the standard should not impinge on zoning laws that may “determine the placement of AI infrastructure” or on a state’s own use of AI.

According to the framework, Congress should instead focus on any state law that attempts “to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.” Although the framework does not point to any particular state laws, the White House may be referring here to bills and laws it has noted in the past, such as the Colorado AI Act, which was described in the December 2025 EO and is set to take effect on June 30, 2026. This direction comes as we continue to see states consider AI-related legislation, such as Utah’s recent HB 286, and follows the 2025 passage of New York’s RAISE Act and California’s similar Transparency in Frontier Artificial Intelligence Act – which we compared in a recent client alert.

The framework also delivers to Congress broader preemption principles, stating in general terms that state laws should not cover areas “better suited to the Federal Government” or be contrary to the “national strategy to achieve global AI dominance.” It further asserts that states should avoid undue burdens on using AI “for activity that would be lawful if performed without AI.” 

Finally, siding with AI developer concerns about downstream liability, the framework declares that states “should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models.” It is unclear how the requested national standard would specifically address this issue. However, the scope and wording of any such liability shield could serve as a point of contention in Congress. 

Legislative landscape 

Various factors will likely shape whether Congress advances a broad package of laws establishing a national AI framework in the timeframe the White House requests. Advancing these recommendations would require Congress to consider multiple AI-related bills addressing different subject areas, each of which would be subject to the standard legislative processes of drafting, committee review, debate, and voting. In addition, consideration of such legislation would occur during an election year, which can influence the pace and scope of congressional legislative activity.

Passage of any such legislative package would require support across chambers, including in the Senate. The policy areas addressed in the framework, such as child safety and ratepayer protections, have historically drawn interest from lawmakers across parties. The organization of the framework reflects this breadth, addressing consumer- and family-focused topics earlier in the document, with more structurally complex issues, such as state preemption, addressed later. A similar emphasis appears in the accompanying announcement, which notes that “some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children’s wellbeing or their monthly electricity bill.”

Regardless of whether the framework will lead to passage of a broad legislative package, piecemeal laws, or no further action, the document’s broad reach and content may reflect a policy maturation process. As compared to earlier legislative frameworks and federal statements about AI, the framework addresses topics with more granularity and with increased reference to public sentiment about AI’s impact.

For more information, please contact the authors.  