2 October 2025

California law mandates increased developer transparency for large AI models

Numerous artificial intelligence (AI)-related bills await action from state legislatures and governors around the country, including several on the desk of California Governor Gavin Newsom, who signed one of them into law on September 29, 2025.

The Transparency in Frontier Artificial Intelligence Act (TFAIA), also known as SB 53, focuses largely on transparency and reporting requirements for developers of “foundation models” and larger “frontier models.” TFAIA goes into effect on January 1, 2026, for covered developers and will likely be the broadest public transparency law in effect for AI developers in the United States.

In this client alert, we provide an overview of the law’s key requirements and how they compare to other jurisdictions’ approaches to public transparency requirements for advanced AI models.

Who and what does the law cover?

The law covers “frontier developers,” who are developers that have “trained, or initiated the training of, a frontier model.” There are additional requirements for “large frontier developers,” who are developers that, together with their affiliates, “collectively had annual gross revenues in excess of $500 million in the preceding calendar year.”

A “frontier model” is a foundation model that the TFAIA defines as one “trained using a quantity of computing power greater than 10^26 integer or floating-point operations [FLOPs].” That compute threshold is the same one used in the rescinded Biden-era Executive Order 14110 on artificial intelligence, and it is higher than the 10^25 FLOPs threshold that Regulation (EU) 2024/1689 (the EU AI Act) uses to subject general-purpose AI models to greater regulatory requirements because of presumed systemic risk.

“Foundation models” are broadly defined in the law as AI models that are (1) trained on a broad data set, (2) designed for generality of output, and (3) adaptable to a wide range of distinctive tasks.
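For readers mapping these definitions onto their own models, the sketch below applies the two bright-line thresholds described above: the 10^26 FLOPs training-compute test for a “frontier model” and the $500 million revenue test for a “large frontier developer.” It is a minimal, non-authoritative illustration; the function and variable names are ours, the classification labels are shorthand for the obligations summarized in the rest of this alert, and the sketch does not capture nuances such as the statute’s aggregation of affiliate revenue or its “initiated the training” language.

  # Hypothetical first-pass TFAIA coverage check (illustrative only, not legal advice).
  # Thresholds are taken from the statutory definitions summarized above.
  FRONTIER_COMPUTE_FLOPS = 10**26            # "frontier model": trained with MORE than 10^26 FLOPs
  LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # "large frontier developer": prior-year revenue in excess of $500M

  def classify_developer(training_flops: float, prior_year_revenue_usd: float) -> str:
      """Rough, non-authoritative classification under TFAIA's headline thresholds."""
      if training_flops <= FRONTIER_COMPUTE_FLOPS:
          return "not a frontier developer (model does not exceed the compute threshold)"
      if prior_year_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
          return "large frontier developer (frontier AI framework, transparency report, and reporting obligations)"
      return "frontier developer (transparency report and incident reporting obligations)"

  # Example: a model trained with 3e26 FLOPs by a developer with $2 billion in prior-year revenue
  print(classify_developer(3e26, 2_000_000_000))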

What is required of frontier developers?

Frontier AI frameworks

Large frontier developers must write, implement, and clearly and conspicuously publish on their websites a “frontier AI framework.” The framework comprises “documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks” of their frontier models.

The framework must describe how the developers approach incorporating national standards, international standards, and “industry-consensus best practices.”

Developers also must describe:

  • Thresholds used to identify and assess whether the model’s capabilities could pose a catastrophic risk

  • Mitigations used to address those capabilities

  • Decisions to deploy or use such models in light of the assessments and mitigations

  • The use of third parties for assessing risks and mitigations

  • How they revisit and update the framework

  • Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer

  • How they identify and respond to critical safety incidents

  • How they institute internal governance practices to ensure these processes are implemented

  • How they assess and manage catastrophic risk resulting from the model’s internal use, including risks from a model circumventing oversight mechanisms

It is not sufficient simply to post the framework once. Instead, developers must review and update (as needed) the framework at least once a year. If a developer makes a “material modification” to its framework, it must “clearly and conspicuously publish” both the modified framework “and a justification for that modification within 30 days.” The law does not define “material modification.”

Transparency reports

Frontier developers must, by the time they deploy any new or substantially modified frontier model, clearly and conspicuously publish a “transparency report” on their websites.

The report must contain, at a minimum: (1) the developer’s website, (2) a mechanism enabling someone to communicate with the developer, (3) the model’s release date, (4) the languages and output modalities supported by the model, (5) the model’s intended uses, and (6) any generally applicable restrictions or conditions on the model’s uses.

Large frontier developers must also include in the report summaries of the steps taken to fulfill the requirements of the frontier AI framework for that frontier model, including assessments of catastrophic risks, the results of those assessments, and the extent to which third-party evaluators were involved.

Catastrophic risks

Large frontier developers must transmit to the Office of Emergency Services (OES) a confidential summary of any assessment of “catastrophic risk” resulting from a frontier model’s internal use.

“Catastrophic risk” is defined as “a foreseeable and material risk” that a frontier model “will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident.”

Such incidents are limited to those involving the model:

“(A) Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon.

(B) Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense.

(C) Evading the control of its frontier developer or user.”

The definition excludes: (1) risks from information that “is otherwise publicly accessible in a substantially similar form from a source other than a foundation model,” (2) lawful federal government activity, and (3) harm caused by the model “in combination with other software” where the model itself did not materially contribute to the harm.

Critical safety incidents

Frontier developers must report “critical safety incidents” involving a frontier model to OES within 15 days of discovering the incident, unless the incident “poses an imminent risk of death or serious physical injury,” in which case the developer shall disclose it “within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.”

“Critical safety incidents” are defined to mean:

“(1) Unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury.

(2) Harm resulting from the materialization of a catastrophic risk.

(3) Loss of control of a frontier model causing death or bodily injury.

(4) A frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.”

OES must establish a mechanism to allow not only the developers but also the public to report such incidents. Starting in 2027, OES will publish annual reports about critical safety incidents, using anonymized and aggregated information. The law also includes provisions allowing developers to follow, in certain circumstances, applicable federal reporting laws or guidelines as a means of complying with TFAIA’s incident reporting requirements.

Materially false or misleading statements

The new law forbids frontier developers from making “materially false or misleading statements” relating to catastrophic risk. Large frontier developers are also prohibited from making such statements relating to their frontier AI frameworks. These prohibitions do not apply to a statement that was “made in good faith and was reasonable under the circumstances.”

What else is in the law?

Whistleblower protections

Frontier developers must not prevent (whether by regulation, policy, or contract) a covered employee from disclosing information indicating that: (1) the developer’s activities pose a specific and substantial danger to public health or safety resulting from a catastrophic risk or (2) the developer has violated the TFAIA.

Covered employees are those “responsible for assessing, managing, or addressing risk of critical safety incidents.” To be protected, such employees need “reasonable cause” for their beliefs about such dangers or violations, and the disclosures must be to the Attorney General; a federal authority; persons with authority over the employee; or other covered employees with authority to investigate, discover, or correct the reported issue.

Large frontier developers must also provide a “reasonable internal process through which a covered employee may anonymously disclose” such information to the developer itself.

Starting in 2027, the Attorney General will produce a report to the Legislature and the Governor with anonymized and aggregated information about these employee disclosures.

This section of the law is the only one that covers all foundation models, not just frontier-scale ones, and the only one that allows private civil actions or administrative proceedings for violations.

State consortium

The law establishes within the Government Operations Agency (GOA) a consortium that, if funds are legislatively appropriated, is required to develop a framework for the creation of a public cloud computing cluster (CalCompute). The idea is to advance AI development and deployment that is “safe, ethical, equitable, and sustainable,” in part by fostering publicly beneficial research and innovation. The consortium will likely be associated with the University of California and will dissolve after issuing its report on the framework, which is due by January 1, 2027.

How will the TFAIA be enforced?

The Attorney General can seek a civil penalty for noncompliance, but only against large frontier developers that fail to publish or transmit a compliant, TFAIA-required document; make a materially false or misleading statement as described above; fail to report a critical safety incident as required; or fail to comply with their own frontier AI framework. The penalty is capped at $1 million per violation.

How does this law relate to other California AI laws that apply to developers?

The California AI Training Data Transparency Act requires the developer of a generative AI system or service to post on its website documentation regarding the data used to train that system or service. Signed in 2024 and taking effect on January 1, 2026, this law is limited to systems or services that are made publicly available for Californians to use. A developer may thus sometimes be subject to the transparency requirements of both laws, for AI models that meet both the definition of a “frontier model” under TFAIA and that of “generative artificial intelligence” under the AI Training Data Transparency Act.

How does this law compare to AI developer transparency requirements in other jurisdictions?

The principal comparison outside the United States is likely to the EU AI Act, which has robust requirements for providers (typically the developers) of general-purpose AI (GPAI) models to draft, maintain, and make available certain information. For example, providers must establish policies regarding public disclosure of risk management material such as impact assessments, audits, model documentation and validation, and testing results. As noted above, GPAI models that pose systemic risk are subject to more comprehensive compliance requirements, similar to the way that large frontier models are treated in TFAIA.

In the United States, the closest comparison may be to a narrower provision of the Colorado AI Act, which requires developers to make readily available on their websites, and regularly update, a statement summarizing the types of high-risk AI systems they have developed (or intentionally and substantially modified) and how they manage known or reasonably foreseeable risks of algorithmic discrimination from those high-risk AI systems.

What are the larger takeaways?

Of course, not every bill makes it to the finish line: of the many AI-related state bills moving through the legislative process, only some will become law. Like TFAIA, though, some of those new laws will substantially change what AI companies are required to disclose to the public. Such transparency requirements can have national impact, given that everyone, not just Californians, can see what is posted on a developer’s public website.

Developers of AI models that may fall under TFAIA are encouraged to take immediate steps to determine the extent to which they need to create and post frontier AI frameworks or transparency reports on their websites, and whether they need to report anything relating to catastrophic risks or critical safety incidents to the California Office of Emergency Services.

Find out more

DLA Piper’s team of AI lawyers, data scientists, and policy specialists helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. We continuously monitor developments in AI and their impact on industry across the world.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.