
22 December 2025
The RAISE Act: New York joins California in requiring developer transparency for large AI models
New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act on December 19, 2025, marking a major legal development in the world of artificial intelligence (AI).
The law focuses on AI safety and imposes transparency obligations on large developers of advanced AI models. As such, it stands alongside California’s similar and recently enacted Transparency in Frontier Artificial Intelligence Act (TFAIA) as a bicoastal pillar, potentially creating a de facto national standard in this area.
Upon signing the RAISE Act, Governor Hochul indicated that the Legislature had agreed to several amendments, to be passed in the next session, that are intended to align the law more closely with TFAIA. It appears that this agreement includes an effective date of January 1, 2027 for the RAISE Act, whereas TFAIA takes effect on January 1, 2026.
The RAISE Act and TFAIA are set to provide the strongest and broadest AI developer transparency mandates in the United States. Their passage is notable in light of the recently issued Executive Order (EO) seeking paths to preempt AI-related state laws, as well as congressional efforts to impose a moratorium on such laws – federal activity described in a recent DLA Piper client alert.
As noted in that alert, the final version of the EO had excised negative references to TFAIA found in an earlier draft. It is thus possible that these two new state laws will influence the contours of the federal framework that the EO directs federal officials to consider.
In this client alert, members of DLA Piper’s AI and Data Analytics team provide an overview of the RAISE Act’s key requirements – including what will remain or change – and how these requirements compare to developer obligations under TFAIA.
Who and what does the law cover?
The RAISE Act covers “large developers,” who are persons that have 1) trained at least one frontier model with compute costs exceeding $5 million and 2) spent more than $100 million in aggregate compute costs in training frontier models. Accredited colleges and universities are exempted if engaging in academic research. It has been reported, however, that Governor Hochul’s deal with legislators will result in later amendments that replace the compute cost figures with a threshold based on developer revenue. According to another report, that change would align the definition with TFAIA’s definition of “large frontier developers,” who are frontier developers grossing more than $500 million in annual revenue.
A “frontier model” is defined in the RAISE Act as either 1) an AI model trained using greater than 10^26 integer or floating-point operations, with compute costs exceeding $100 million, or 2) an AI model trained via “knowledge distillation,” a technique involving the use of a frontier model to train a smaller model. TFAIA’s definition of “frontier model” is the same as in the RAISE Act in terms of computing operation size, but it does not include the “knowledge distillation” concept.
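For readers who find the layered numeric tests easier to track in pseudocode, the short Python sketch below restates the two definitions as simple boolean checks. It is purely illustrative: the function names, inputs, and the treatment of knowledge distillation as a simple flag are our own simplifications and do not capture every nuance of the statutory text.

# Illustrative restatement of the RAISE Act thresholds described above;
# all names and inputs are hypothetical simplifications, not statutory terms.
OPERATIONS_THRESHOLD = 10**26         # training compute (integer or floating-point operations)
FRONTIER_COST_USD = 100_000_000       # per-model compute cost in the "frontier model" definition
SINGLE_MODEL_COST_USD = 5_000_000     # per-model compute cost in the "large developer" definition
AGGREGATE_COST_USD = 100_000_000      # aggregate compute cost in the "large developer" definition

def is_frontier_model(training_operations: float,
                      compute_cost_usd: float,
                      trained_by_knowledge_distillation: bool) -> bool:
    """Prong 1 (compute size and cost) or prong 2 (knowledge distillation) suffices."""
    compute_prong = (training_operations > OPERATIONS_THRESHOLD
                     and compute_cost_usd > FRONTIER_COST_USD)
    return compute_prong or trained_by_knowledge_distillation

def is_large_developer(frontier_model_costs_usd: list[float]) -> bool:
    """Both conditions must hold: one model over $5 million and over $100 million in aggregate."""
    one_expensive_model = any(cost > SINGLE_MODEL_COST_USD for cost in frontier_model_costs_usd)
    return one_expensive_model and sum(frontier_model_costs_usd) > AGGREGATE_COST_USD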
Frontier models are covered only if they “are developed, deployed, or operating in whole or in part in New York state.” In contrast, TFAIA contains no specific language relating to developer activity within California.
It is unclear whether, once both laws are in effect, any developer will be covered by one law but not the other. According to one article, the new law’s expansive scope could reach some existing large models.
What is required of large developers?
Safety and security protocol
Before deploying a frontier model, the large developer must implement and retain a written “safety and security protocol,” which it must publish conspicuously and transmit to a state agency. The protocol consists of technical and organizational documents that focus principally on the risk of “critical harm.” Developers could be held responsible if their activities “made it substantially easier or more likely for” intervening human actors to inflict the harm at issue. The protocol must:
- Specify reasonable protections and procedures that reduce the risk of critical harm
- Describe reasonable cybersecurity protections that reduce the risk of unauthorized access or misuse leading to critical harm, whether by sophisticated or other actors
- Describe in detail the testing procedure to evaluate unreasonable risks of critical harm and to assess possible misuse, modification, or evasion of developer or user control
- State compliance requirements with enough detail to allow the developer or a third party to readily ascertain whether the protocol has been followed
- Describe how the developer will fulfill its obligations under this law, and
- Designate senior personnel responsible for ensuring compliance.
The protocol requirement is similar to TFAIA’s requirement that developers implement and publish a “frontier AI framework” documenting how they deal with “catastrophic risks” of their frontier models. Both laws also require covered developers to review this documentation annually and make modifications as needed.
While the elements of a RAISE Act protocol are not identical to what is mandated for a TFAIA framework, the descriptions appear close enough that the content of one could be tailored to suffice for the other. One difference, however, is that only TFAIA requires developers to describe how they incorporate national and international standards and reflect “industry-consensus best practices.”
Critical harm
As for the key term “critical harm,” the RAISE Act says it “means the death or serious injury of one hundred or more people” or at least $1 billion in monetary or property damage “caused or materially enabled by” a frontier model, via either 1) the creation or use of certain types of weapons or 2) a model that acts with limited human intervention in a way that would constitute a crime if committed by a human acting with at least gross negligence.
In contrast, TFAIA uses the term “catastrophic risk,” which has a similar definition with a few differences – such as a lower death and injury threshold (more than 50 people will suffice), a restricted description of applicable human crimes (murder, assault, extortion, or theft), and a third possible risk: evading developer or user control. TFAIA also contains certain exclusions from the definition, such as when the risk comes from otherwise publicly accessible information.
Substantive obligations
While the RAISE Act, like TFAIA, is mostly about transparency, it currently includes two substantive obligations: one requires developers to “[i]mplement appropriate safeguards to prevent unreasonable risk of critical harm,” and the other prohibits them from deploying frontier models that “would create an unreasonable risk of critical harm.” It appears that Governor Hochul reached agreement with state legislators on a future amendment that would remove the deployment prohibition, but it is unclear whether the obligation to implement safeguards is also on the chopping block. If it survives, it would be a notable step beyond anything found in TFAIA or other state laws applicable to AI developers. It would also leave open interpretive questions about what safeguards are “appropriate” and what risk of critical harm is “unreasonable” for compliance and enforcement purposes.
Third-party audits
The RAISE Act requires large developers to “annually retain a third party to perform an independent audit of compliance with” the law’s requirements. The auditor is to use undefined “best practices,” be granted access to necessary materials to assess compliance, and produce a detailed report with recommendations. Large developers must “conspicuously publish” the report and provide a copy to a state agency.
It is unclear whether this requirement is to be excised in the legislative amendment process, as neither Governor Hochul’s office nor the legislators sponsoring the bill have referred to it one way or the other in public statements. If it survives, it will reflect another key difference from TFAIA – which does not require third-party audits – and from any other state law. Instead, TFAIA mandates that frontier developers publish transparency reports, with large frontier developers required to disclose “the extent to which third-party evaluators were involved” in assessing whether AI frameworks are compliant.
Safety incident disclosures
The last transparency-related obligation in the RAISE Act is for a large developer to disclose each safety incident affecting a frontier model to the state’s division of homeland security and emergency services within 72 hours of the developer learning of the incident or having a reasonable belief that it has occurred.
A “safety incident” is one that “provides demonstrable evidence of an increased risk of critical harm” and is limited to four possibilities regarding a frontier model: 1) the model autonomously engages in behavior not requested by a user, 2) the model weights are taken or accessed without authorization, inadvertently released, or maliciously misused, 3) there is a critical failure of any technical or administrative controls on the model, or 4) the model is used without authorization.
The comparable TFAIA provision requires frontier developers to report “critical safety incidents,” a term with a similar focus on securing access to and control of the model, but one that also covers a model deceiving a developer in order to evade control or monitoring. The main difference, however, is that covered developers under TFAIA have more time – 15 days – to make their reports (except for incidents posing an imminent risk of death or serious physical injury, which must be reported within 24 hours). TFAIA also does not require developers to report incidents they reasonably believe have occurred but have not confirmed.
False or materially misleading statements
Large developers violate the RAISE Act if they “knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to” the law. This provision has a slightly more circumscribed counterpart in TFAIA, which is limited to statements made regarding catastrophic risk or statements in the AI framework. The California provision also contains an exception for a statement that was “made in good faith and was reasonable under the circumstances.”
Whistleblower protections
Large developers are forbidden from preventing an employee from disclosing, or retaliating against an employee who discloses or threatens to disclose, information to the Attorney General that the employee reasonably believes involves developer activities posing “an unreasonable or substantial risk of critical harm.” Developers are required to inform new employees of this protection and post notices about it. Employees harmed by violations of this section may seek judicial relief.
TFAIA has a comparable provision that extends coverage to disclosures 1) about any violations of that law, 2) made to federal authorities, or 3) relating not just to frontier models but the larger category of foundation models. On the other hand, TFAIA limits covered employees to those responsible for dealing with critical safety incidents.
How will the RAISE Act be enforced?
Governor Hochul’s office indicated that the RAISE Act would ultimately establish “an oversight office within the Department of Financial Services that will assess large frontier developers” and issue annual reports, although no reference to it appears in the signed version of the law.
The office of one of the bill’s legislative sponsors stated that this new office would be funded by developer fees that the office would determine itself, and that it would issue regulations. Such authority goes beyond TFAIA, which does obligate an existing state agency to issue annual safety reports but contains nothing about rules or fees.
The Attorney General is empowered under the RAISE Act to bring civil actions for violations. While the civil penalty amounts described in the signed version of the bill are much higher, Governor Hochul’s office indicates that penalties in an amended version of the law will be “up to $1 million for the first violation and up to $3 million for subsequent violations.” TFAIA also has a $1 million penalty cap for Attorney General actions, but it applies per violation and does not rise for developers with multiple violations.
The signed version of the RAISE Act also includes provisions that aim to stop developers from contractual or corporate maneuvers intended to transfer liability to other parties.
First, the law declares void any contractual provision by which a developer seeks either to avoid liability for violations or “shift liability” to others in exchange for use of or access to the developer’s goods or services. Second, to effectuate the RAISE Act’s intent, a court is directed to “disregard corporate formalities and impose joint and several liability on affiliated entities” if 1) the corporate structure was developed via purposeful and unreasonable steps to limit or avoid liability, and 2) that structure would indeed “frustrate recovery of penalties, damages, or injunctive relief.”
What are the larger takeaways?
The RAISE Act and TFAIA are signs pointing to an emerging national standard for developers training large and advanced AI models. This standard focuses more on transparency and reporting, and less on substantive prohibitions relating to how such models are built or deployed.
Even a single state law requiring public transparency has national significance, though, because public information in New York or California is also available in other states. Public knowledge about AI-related safety and harm may also provide more of an evidentiary foundation for policymakers to consider in future debates about substantive AI regulations.
Developers of AI models that may be covered under the RAISE Act or TFAIA are encouraged to take immediate steps to determine their specific compliance obligations, especially given that TFAIA’s effective date is fast approaching.
While the RAISE Act is still to be amended and will not take effect for another year, it would be prudent for developers to plan their compliance with an eye toward protocols and documents that will satisfy both laws.
See DLA Piper’s earlier client alert on TFAIA for an analysis of how its obligations – and, by extension, the similar obligations in the RAISE Act – compare with the EU AI Act and other US state laws.
Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy specialists helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. The firm continuously monitors AI-related updates and developments and their impact on industry around the world.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.


