
20 February 2024 | 10 minute read

California’s SB-1047: Understanding the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act

On February 7, 2024, Senator Scott Wiener introduced Senate Bill 1047 (SB-1047) – known as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (the Act) – into the California State Legislature. Aiming to regulate the development and use of advanced artificial intelligence (AI) models, the Act requires developers to make certain safety determinations before training AI models, comply with various safety requirements, and report AI safety incidents. It further establishes the Frontier Model Division within the Department of Technology to oversee these AI models and introduces civil penalties for violations of the Act.

In this alert, we describe the current legal landscape related to AI and how SB-1047 may impact organizations that develop AI systems.

The AI regulation landscape

SB-1047 is one of the many AI-related bills currently active in state legislatures across the US, as state lawmakers remain focused on setting the initial rules for AI systems. Much of this regulatory activity stems from the potential harms that may be caused by the proliferation of highly advanced AI systems, such as generative AI. Most of the 190 AI-related bills introduced in 2023 related to deepfakes, generative AI systems, and the use of AI in the employment context.

Further, in 2023, both President Joe Biden and California’s Governor Gavin Newsom signed Executive Orders related to AI. These two Executive Orders added to the growing body of legal directives surrounding the use and deployment of AI.

  • On September 6, 2023, Governor Newsom signed Executive Order N-12-23. This Executive Order emphasized California’s role in the development of generative AI and announced key directives aimed at understanding the risks posed by the emerging technology, ensuring equitable outcomes when it is used, and preparing the state government workforce for its use.

  • Following Governor Newsom’s Executive Order, President Biden signed the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (White House Executive Order) on October 30, 2023. This sweeping Executive Order aims to advance a coordinated, federal government-wide approach to the safe and responsible development of AI. It directs many of the primary administrative agencies on topics ranging from the government’s use of AI to requirements for the testing and development of AI systems.

The regulation of AI is also a top priority internationally. On December 9, 2023, the European Union reached a provisional agreement on the Artificial Intelligence Act (AIA). The AIA, which was proposed by the European Commission in April 2021 and now must be formally adopted to become European Union law, is the first comprehensive law on AI by a major regulator.

Much like the legislative and executive actions described above, the AIA focuses on the safe deployment and use of AI. As such, it places many of its obligations on AI developers that place high-risk AI systems into service in the European Union.

SB-1047 requirements

SB-1047 introduces a number of requirements aimed at establishing safety standards for the development of large-scale AI systems. The Act’s major requirements are outlined below.

  • AI systems covered by the Act: Not all AI systems are scrutinized equally under the Act. The Act defines “artificial intelligence models” as machine-based systems that can make predictions, recommendations, or decisions influencing real or virtual environments and that can formulate options for information or action. However, the Act does not address AI models generally – rather, it focuses specifically on AI models it defines as “covered AI models.” These are models that meet one or both of the following requirements: (1) the model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations or (2) the model has similar performance to that of a state-of-the-art foundation model (for an illustrative estimate of what the computing-power threshold implies, see the sketch following this list).

  • Safety assessment requirement: Under the Act, developers would be required to assess safety before training a “covered AI model,” ensuring that the models are safe and do not pose a risk to public safety or welfare. This involves making a positive safety determination that the model does not have the capability to enable certain harms, such as the creation of chemical or biological weapons that would result in mass casualties or a cyberattack on critical infrastructure resulting in $500 million or more in damage.

  • Third-party model testing: Developers of covered AI models are required to implement a written safety and security protocol which provides sufficient detail for third parties to test the safety of the models they develop.

  • Shutdown capability: Under the Act, developers are required to implement a shutdown capability for AI models when such models have not obtained a positive safety determination. This means that developers must be able to deactivate or restrict the AI model's functionalities until they are able to obtain such a determination.

  • Annual compliance certification: Developers of covered models which are not the subject of a positive safety determination would be required to submit an annual certification, signed by a senior officer within the organization, stating that they have complied with the requirements of the Act.

  • Safety incident reporting: The Act would require mandatory reporting of AI safety incidents to the Frontier Model Division,[1] a division seated within the California Department of Technology and established by the Act. This reporting requirement would compel the developer to inform the Frontier Model Division within 72 hours of an “artificial intelligence safety incident.”[2]

  • Whistleblower protections: The Act incorporates provisions to protect and encourage whistleblowing within AI development entities to ensure that employees can report non-compliance with the Act without fear of retaliation.

  • Policies for computing clusters: Under the Act, organizations operating computing clusters must establish policies to obtain identifying information from a prospective customer that would utilize resources sufficient to train a covered AI model.[3] The organization operating the computing cluster is also required to validate the accuracy of this information at least annually.

  • Penalties for non-compliance: The Act proposes penalties for organizations that do not comply with its requirements. Entities that violate the Act may face injunctions, fines, and other penalties.
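To put the covered-model computing-power threshold in perspective, the sketch below estimates a training run’s total operations using the commonly cited approximation that training a dense model takes roughly 6 × (number of parameters) × (number of training tokens) floating-point operations. Both this rule of thumb and the example figures are illustrative assumptions of ours; the Act itself specifies only the 10^26-operation threshold.

# Illustrative sketch only: the 6 * parameters * tokens approximation and the
# example figures are assumptions for illustration, not part of SB-1047 itself.

THRESHOLD_OPS = 1e26  # the Act's covered-model training-compute threshold

def estimated_training_ops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of total operations to train a dense model."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion tokens
ops = estimated_training_ops(1e12, 20e12)
print(f"Estimated training compute: {ops:.2e} operations")  # ~1.20e+26
print("Exceeds the 10^26 threshold:", ops > THRESHOLD_OPS)  # True

Under this rough measure, the threshold sits above the publicly estimated training compute of today’s foundation models, consistent with the high threshold for regulation discussed in the Key takeaways below.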

Key takeaways

  • California is keen on AI regulation: California is among the many US states with multiple proposed and pending legislative acts related to AI. The state has proposed AI laws addressing issues ranging from healthcare discrimination to the mental health impacts of social media. California’s legislature is focused on setting boundaries for AI use and development in the state.

  • High threshold for regulation: The Act fits conceptually with the AIA, as both regulations focus their requirements on the subset of AI systems viewed as most likely to cause harm. However, the way the two regulations define those systems differs significantly. The threshold for regulation under the Act is extremely high, capturing only the most sophisticated AI systems or those requiring training compute beyond anything seen among the current generation of foundation models. In contrast, the AIA reserves its most significant oversight for systems with the highest probability of causing harm, while also taking into account the severity of that harm.

  • AI safety is top of mind: Like many of the laws proposed by state legislatures, SB-1047 puts the avoidance of harms produced by AI at center stage. Organizations developing AI in the state should consider the effects of the technology they develop early in the development lifecycle to best ensure compliance with emerging laws and regulations.

  • Developers will be accountable: SB-1047 holds organizations developing highly sophisticated AI systems accountable for the effects of the technology they create. Not only does the proposed law require reporting for some companies, but it also has proposed penalties for non-compliance.

  • Developers must test their AI systems: The Act requires developers to test their systems for harms and underscores the importance of third-party testing of such systems. Organizations developing AI systems should understand that laws and regulations around the world increasingly incorporate testing standards intended to minimize AI-related harms.

DLA Piper is here to help

DLA Piper’s team of lawyers and data scientists assist organizations in navigating the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and its impacts on industry across the world. Moreover, DLA Piper has significant experience helping developers, deployers, and adopters of AI navigate the emerging global legal and regulatory landscape and test their AI systems for harms.

As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper was conferred the Innovative Lawyers in Technology award for its AI and Data Analytics practice.

DLA Piper’s AI policy team in Washington, DC is led by the Founding Director of the Senate Artificial Intelligence Caucus.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.



[1] The Frontier Model Division would generally be tasked with overseeing AI model safety and compliance.

[2] The Act defines an “artificial intelligence safety incident” as any of the following: (1) A covered model autonomously engaging in a sustained sequence of unsafe behavior other than at the request of a user; (2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model; (3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model, designed to limit access to a hazardous capability of a covered model; or (4) Unauthorized use of the hazardous capability of a covered model.

[3] The Act defines a “computing cluster” as a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training artificial intelligence.
