
14 October 2025
California delivers a new batch of AI-related legislation: Top points
In the absence of federal legislation on artificial intelligence (AI), states have been enacting their own laws – and no state has been more active than California. For example, our recent client alert details new AI developer obligations found in the Transparency in Frontier Artificial Intelligence Act (TFAIA).
The California state legislature recently passed many AI-related bills, which Governor Gavin Newsom had until October 13 to sign or veto. As of October 14, he has signed several of those bills. Unless otherwise stated, these new laws take effect on January 1, 2026.
In this alert, we summarize the new California AI laws, along with key considerations for their implementation.
AB489
Signed on October 12, AB489 prohibits developers and deployers of AI technology from using specified terms, letters, or phrases in the advertising or functionality of that technology to falsely indicate (a) possession of a healthcare license or certificate, or (b) that advice, care, reports, or assessments from such technology are being provided by a person with such a license or certificate.
AB489 extends the existing state law that forbids individuals from making similar false claims that they are authorized to practice as a specified healthcare professional. Healthcare facilities in California that use generative AI to communicate with patients are also required to include disclaimers about its use.
SB243
Governor Newsom signed SB243, a highly anticipated bill on companion bots, on October 13. As detailed in a prior client alert, California is the second state, after New York, to regulate companion bots, and the first state to require their operators to implement specific safety protocols.
SB243 requires platforms to implement protocols to prevent chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content with users, especially minors. It also mandates recurring alerts to users – every three hours for minors – reminding them that they are interacting with an AI, not a real person, and encouraging them to take breaks. Additional requirements include annual reporting to the Office of Suicide Prevention, transparency around safety protocols, and regular third-party audits. The law also allows individuals harmed by violations to seek injunctive relief and damages.
Later on October 13, Governor Newsom vetoed AB1064, the Leading Ethical AI Development (LEAD) for Kids Act, which would have prohibited making a companion chatbot available to a child unless the chatbot was not foreseeably capable of engaging in specified conduct that could harm children.
Governor Newsom’s veto memo stated that the bill’s prohibitions were too broad in that they might lead to an effective ban on children’s use of conversational bots in the state. He pledged to work with legislators to build on SB243 in 2026.
California AI Transparency Act (AB853)
AB853, also signed on October 13, updates a 2024 law that requires generative AI developers to ensure that content created using their tools includes provenance data that can be readily accessed through AI detection tools. “Provenance data” refers to “data that is embedded into digital content, or that is included in the digital content’s metadata, for the purpose of verifying the digital content’s authenticity, origin, or history of modification.” The 2024 law was set to take effect on January 1, 2026; the amendments extend that compliance deadline to August 2, 2026.
The amended version also requires “large online platforms,” as defined, to develop a way for users to easily access provenance data of uploaded content. That requirement takes effect on January 1, 2027.
Further, starting January 1, 2028, AB853 will require that “capture device manufacturers” include features on their products that enable users to include provenance data in the content that they capture. These devices include most mobile phones, cameras, and voice recorders.
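To make the statutory concept concrete, the sketch below shows one way provenance data might be embedded in, and later read back from, an image’s metadata. This is a minimal illustration in Python using the Pillow imaging library; the “provenance” key and record fields are hypothetical, as AB853 does not prescribe a technical format, and production systems would more likely adopt an established scheme such as C2PA Content Credentials.

    # Illustrative only: AB853 does not mandate this format; field names are hypothetical.
    import json
    from PIL import Image, PngImagePlugin

    def embed_provenance(src_path: str, dst_path: str, generator: str) -> None:
        # Build a simple provenance record covering authenticity, origin, and
        # modification history, echoing the statutory definition.
        record = json.dumps({
            "generator": generator,     # tool that produced the content
            "origin": "ai-generated",   # origin/authenticity marker
            "modified": [],             # history of modification
        })
        img = Image.open(src_path)
        info = PngImagePlugin.PngInfo()
        info.add_text("provenance", record)  # store as a PNG text chunk
        img.save(dst_path, pnginfo=info)

    def read_provenance(path: str) -> dict | None:
        # The "AI detection tool" side: retrieve the record if present.
        text = Image.open(path).info.get("provenance")
        return json.loads(text) if text else None

A detection tool in the statute’s sense would, at a minimum, perform the read step above; established provenance schemes add cryptographic signing so that the record itself can be verified rather than merely read.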
AB621
On October 13, Governor Newsom also signed AB621, extending existing state law on nonconsensual deepfakes. Among other things, AB621 adds liability for “a person that provides a service that enables the ongoing operation of a deepfake pornography service.” Such persons are liable only if they fail to stop providing that service within 30 days after receiving evidence that they are enabling such an operation. As framed in the bill, this liability is merely an explicit application of the law’s existing coverage of anyone who “knowingly facilitates” or “recklessly aids or abets” the creation or intentional disclosure of such content. However, the use of the undefined term “enables” creates doubt as to which services fall within the scope of this new language.
AB316
Also signed on October 13, AB316 applies to civil actions in which plaintiffs have claimed that they have been harmed by an AI system that the defendant had “developed, modified, or used.” In such actions, defendants will be barred from asserting a defense that the AI “autonomously caused the harm.” AB316 still allows such defendants to raise other affirmative defenses “including evidence relevant to causation or foreseeability,” or to present other evidence relevant to “comparative fault.” It is not clear how courts will interpret this bar in practice.
Preventing Algorithmic Price Fixing Act (AB325)
Signed on October 6, AB325 makes it “unlawful for a person to use or distribute a common pricing algorithm” in either of two ways: (1) “as part of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce,” or (2) “if the person coerces another person to set or adopt a recommended price or commercial term recommended by the common pricing algorithm for the same or similar products or services.” See our earlier client alert on unsuccessful congressional attempts to regulate pricing algorithms.
Conclusion
Although California’s legislative activity on AI will slow until the next legislative session, proposals continue to advance in other states. For example, in June 2025, the New York legislature passed the Responsible AI Safety and Education Act (RAISE Act), a major piece of legislation that, similar to the TFAIA, would impose transparency requirements on developers of frontier AI models. Governor Kathy Hochul has until the end of 2025 to approve or reject it.
Given the pace of state legislative developments, companies are seeking effective AI governance and compliance approaches that allow for efficient incorporation of new legal requirements as they come into effect across the country.
Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy specialists helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. Recognized by Chambers Global as a 2025 Global Market Leader in Artificial Intelligence and named by BTI Consulting Group as a 2026 GenAI Litigation Powerhouse, DLA Piper is trusted globally for its leadership and innovation in the field. We continuously monitor legal and regulatory developments in AI and their impact on industries around the world.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.


