
7 May 2026
Unpacking SB5: Connecticut’s new AI law
Companion bots, synthetic media, automated employment decision tools, and more

States continue to debate and pass artificial intelligence (AI) laws at a rapid pace. The latest major statute poised to hit the books is Connecticut's bipartisan SB5, which the state legislature passed on May 1, 2026, and which Governor Ned Lamont is reportedly set to sign. The 67-page law is not a broad governance statute but rather a set of separate AI bills linked together.
In this alert, we unpack SB5 and compare it to similar laws passed in other states.
AI companions: The seventh state companion bot law and a potential ban on providing chatbots to children
Connecticut will join New York, California, Washington, Oregon, Idaho, and Iowa as states with laws addressing potentially harmful interactions with conversational chatbots.
Effective January 1, 2027, Connecticut’s law employs a broad definition of “AI companion,” covering any AI model that “(i) communicates with individuals in natural language, and (ii) simulates human conversation and interaction through text, audio or video.” The only state laws with an equivalent definition are Idaho S1297 and Iowa SF2417, whereas the other state laws restrict coverage to chatbots that not only exhibit anthropomorphic features but also sustain a relationship across multiple interactions. All of the companion bot laws have various exclusions for chatbots used for business purposes.
Similar to provisions in the other companion bot laws, Connecticut’s version requires providers to (1) maintain a protocol for making reasonable efforts to detect expressions of suicide, self-harm, or imminent violence and for referring the user to appropriate mental health evaluation and treatment resources, and (2) provide clear and conspicuous notices, initially and hourly thereafter, that the bot is not a human.
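To make these two duties concrete, below is a minimal Python sketch of how a provider might wire them into a chat loop. The keyword-based detect_crisis_signals check and the generate_reply callable are hypothetical placeholders, not anything the statute prescribes; a production protocol would rely on a dedicated safety classifier and vetted referral resources.

```python
import time

# Hypothetical placeholder: a production system would use a trained
# self-harm/violence classifier or a moderation service, not keywords.
def detect_crisis_signals(message: str) -> bool:
    crisis_terms = ("suicide", "kill myself", "self-harm", "hurt someone")
    lowered = message.lower()
    return any(term in lowered for term in crisis_terms)

CRISIS_REFERRAL = (
    "If you are thinking about harming yourself or others, please contact "
    "a crisis line or a licensed mental health professional."
)
AI_DISCLOSURE = "Reminder: you are talking to an AI companion, not a human."
DISCLOSURE_INTERVAL = 3600.0  # statute requires initial and hourly notices

class CompanionSession:
    def __init__(self, generate_reply):
        self.generate_reply = generate_reply  # underlying chatbot callable
        self.last_disclosure = None  # timestamp of the last "not human" notice

    def respond(self, user_message: str) -> str:
        parts = []
        now = time.monotonic()
        # (2) Clear and conspicuous notice, initially and hourly thereafter.
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL:
            parts.append(AI_DISCLOSURE)
            self.last_disclosure = now
        # (1) Reasonable efforts to detect crisis expressions and refer out.
        if detect_crisis_signals(user_message):
            parts.append(CRISIS_REFERRAL)
        parts.append(self.generate_reply(user_message))
        return "\n\n".join(parts)
```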
While those requirements apply to all users, the law adds a prohibition on providing AI companions to users under 18 “if it is reasonably foreseeable that” the companion “is capable of” any of the following:
“(A) Encouraging the user to engage in self-harm, suicidal ideation, violence, disordered eating or the unlawful consumption of alcohol or drugs;
(B) Offering mental health services to the user…;
(C) Discouraging the user from seeking (i) mental health services from a licensed mental health professional, or (ii) assistance from an appropriate adult;
(D) Encouraging the user to harm others or engage in any illegal activity;
(E) Engaging in any romantic, erotic or sexually explicit interaction with the user;
(F) Prioritizing validation of the user's beliefs, preferences or desires over factual accuracy or the user's safety;
(G) Implementing a system of rewards or affirmations for the user based on a variable ratio or variable interval reinforcement schedule for the purpose of maximizing the user's engagement time with such artificial intelligence companion; or
(H) Optimizing user engagement in any manner that supersedes the prohibitions in (A) through (G), inclusive, of this subdivision.”
Subsection (B) on offering mental health services to the user comes with an exception for a companion that a licensed mental health professional is using with a patient as part of a treatment plan, where the companion was developed with “robust, independent, peer-reviewed clinical trial data demonstrating [its] safety and efficacy” in treating specific conditions and populations, and where the developers and the licensed professional are following certain protocols involving transparency, disclosure, supervision, and accountability. In the mental health context, the references to clinical testing, safety, and efficacy reflect the Food and Drug Administration’s federal mandate, which may bear on federal preemption debates.
Subsection (F) appears to address prominent concerns about “sycophancy,” or unchecked reinforcement of user inputs. However, general references to “beliefs” and “factual accuracy” may invite First Amendment challenges and claims of political bias, depending on whether and how the state attempts to enforce this part of the law.
Some of the other subsections mirror elements of other companion bot laws. For example, subsections (G) and (H), addressing maximizing or optimizing user engagement, follow the path of the aforementioned Idaho and Iowa laws, as well as the recently passed companion bot laws in Washington and Oregon. That states are now passing laws on children’s engagement with AI tools shows legislators acting far more proactively, and far earlier, than they did when similar concerns arose around social media.
Under the child-specific provisions, bot providers have a safe harbor if they “reasonably determined” that the user was at least 18 years old.
The Attorney General (AG) may file suit under any of the companion bot provisions and seek civil penalties. Aggrieved users – or parents of minor aggrieved users – may sue within 3 years of a violation to recover actual and punitive damages, along with attorneys’ fees.
The Connecticut law may be the most restrictive companion bot regulation in the US, given its extensive child-specific prohibitions, which resemble those of California bill AB1064, vetoed by Governor Gavin Newsom last year. As discussed in a prior client alert, Governor Newsom’s veto memo stated that the bill’s prohibitions were so broad that they might effectively ban children’s use of conversational bots in the state. The vetoed bill had a much narrower definition of “companion chatbot” than the Connecticut law, which appears to sweep in every general-purpose chatbot available to consumers. Especially given the challenges of building guardrails into large language model-based systems to reliably prevent certain outputs, the Connecticut law may function as a ban on providing chatbots to individuals under 18 in the state.
Synthetic media transparency
The new Connecticut law also imposes transparency obligations on developers of AI systems or models capable of generating “synthetic digital content,” including AI-generated audio, images, text, or video. By October 1, 2027, those developers must ensure that generated content is marked, and detectable, as synthetic.
The law acknowledges that downstream users may circumvent such measures, directing developers to consider “recognized technical standards” and ensure that their solutions are “effective, interoperable, robust and reliable, considering (A) the specificities and limitations of different types of synthetic digital content, (B) the implementation costs, and (C) the generally acknowledged state of the art,” but only “as far as technically feasible.”
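The statute thus leaves the choice of marking technology open. Purely as an illustration of the obligation's shape, and not as a depiction of C2PA or any other recognized standard, the following Python sketch attaches an HMAC-signed manifest identifying content as synthetic and later verifies it; the key handling and all names here are hypothetical.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # illustrative only

def make_provenance_manifest(content: bytes, model_id: str) -> dict:
    # Mark the content as synthetic in a way a verifier can later detect.
    payload = {
        "generator": model_id,
        "synthetic": True,
        "sha256": hashlib.sha256(content).hexdigest(),
        "created": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, "sha256").hexdigest()
    return payload

def verify_manifest(content: bytes, manifest: dict) -> bool:
    # Detect whether content matches its synthetic-media manifest.
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, "sha256").hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())
```

A metadata manifest like this is easily stripped in transit, which is precisely the circumvention the statute acknowledges; more robust approaches pair metadata with watermarking or fingerprinting.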
The synthetic media provisions contain several limitations and exceptions. One is for artistic and satirical content in audio, image, or video format, where the transparency requirement is “limited to a disclosure that does not hinder the display or enjoyment of such work or program.” Even with this limitation, the provision could face First Amendment challenges from creators who argue that any disclosure at all would create a hindrance.
Exceptions include text-only content that “is published to inform the public on any matter of public interest” or “is unlikely to mislead a reasonable person consuming” the content. Other exceptions are for a model that merely “performs an assistive function for standard editing,” “does not substantially alter the input data provided by the developer or the semantics thereof,” or “is used to detect, prevent, investigate or prosecute any crime where authorized by law.”
The synthetic media provisions, which do not specify who enforces them or how, are similar to those in the California AI Transparency Act, as discussed in a prior client alert.
Automated employment tools
Several provisions of the new law address any “automated employment-related decision process” (AEDP), which, at its core, is a computational process that generates any output that “(i) affects the outcome of an employment-related decision, and (ii) is not a de minimis factor that is relied upon in making, or in determining the material terms of, an employment-related decision.” The law provides various examples of outputs that are or are not covered. The operative provisions take effect on October 1, 2027.
Deployers of an AEDP that is intended to interact with an employee or applicant for employment in the state must ensure that it is disclosed and described in plain language to each such person, unless “a reasonable person would deem it obvious” that the person is interacting with an AEDP.
The following provisions apply to deployers who use an AEDP that generates any output “for the purpose of making, or as a substantial factor in making, an employment-related decision concerning an employee or applicant for employment in the state.” (Note the higher standard here: “substantial factor,” versus the AEDP definition’s coverage of any output that is more than a “de minimis factor.”) For such uses, before any such employment-related decision is made, deployers must take the following steps (sketched in code after the list):
- Provide to such employee or applicant a written notice disclosing the AEDP’s use, its purpose, the nature of the decision, and the right to opt out of personal data processing per state law; and
- If such employment-related decision “is adverse” to such employees or applicants, provide them with a high-level statement disclosing the principal reason(s) for the decision, including “the degree to which, and manner in which,” the AEDP output contributed to the decision, the type and source of data processed by the AEDP, and rights to examine and correct any data they did not provide to the deployer.
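To illustrate what a deployer's compliance records might capture, here is a minimal Python sketch mapping the statute's notice elements onto data structures. The class and field names are our own illustrative choices, not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class PreDecisionNotice:
    """Written notice owed before the employment-related decision is made."""
    aedp_description: str  # plain-language description of the AEDP and its use
    purpose: str           # why the AEDP is being used
    decision_nature: str   # e.g., hiring, promotion, termination
    opt_out_rights: str    # right to opt out of personal-data processing

@dataclass
class AdverseDecisionStatement:
    """High-level statement owed when the decision is adverse."""
    principal_reasons: list[str]       # principal reason(s) for the decision
    output_contribution: str           # degree and manner the output contributed
    data_types_and_sources: list[str]  # type and source of data the AEDP processed
    correction_rights: str             # right to examine/correct data the person did not provide
```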
Violations of the AEDP provisions are treated as unfair or deceptive acts or practices under Connecticut’s consumer protection law. The AG, who is the sole enforcer, may give a deployer 60 days to cure violations and avoid suit.
Another section of SB5, effective on October 1, 2026, amends the state employment discrimination law to cover the use of an AEDP that has a discriminatory effect. In a case under that law, the relevant state body or court shall “consider any evidence, or lack of evidence, of anti-bias testing or similar proactive efforts to avoid such discriminatory practice, including, but not limited to, the quality, efficacy, recency and scope of such testing or efforts, the results of such testing or efforts and the response thereto.”
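The statute does not define “anti-bias testing,” but one common form in US employment practice is the four-fifths (80 percent) rule, which compares selection rates across groups. A minimal Python sketch with hypothetical numbers:

```python
def adverse_impact_ratios(selected: dict[str, int],
                          applicants: dict[str, int]) -> dict[str, float]:
    # Ratio of each group's selection rate to the highest group's rate;
    # under the four-fifths rule of thumb, a ratio below 0.8 is a red flag.
    rates = {g: selected[g] / applicants[g] for g in applicants if applicants[g]}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical example: 50 of 200 group-A applicants selected (25%),
# 15 of 120 group-B applicants selected (12.5%).
print(adverse_impact_ratios({"A": 50, "B": 15}, {"A": 200, "B": 120}))
# {'A': 1.0, 'B': 0.5}  -> group B falls below 0.8, warranting review
```

Documenting the scope, recency, and results of such tests, and the response to them, maps directly onto the evidentiary factors the statute lists.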
The AEDP provisions are similar to those found across 2025 amendments to California Consumer Privacy Act regulations, which take effect on January 1, 2027; the Colorado AI Act, which takes effect on June 20, 2026, but is likely to be amended before then; 2024 amendments to the Illinois Human Rights Law; and New York City Local Law 144.
Frontier model whistleblowers
A provision in SB5 provides whistleblower protection for certain employees of frontier model developers who disclose potential catastrophic risks. These protections link to the state’s existing employee whistleblower statute and add specific protocols for developers to implement for internal disclosures of such risks.
The state’s Commissioner for Consumer Protection – not the AG – may file suit against violators for civil penalties and other relief.
As discussed in a prior client alert, California’s recently enacted Transparency in Frontier Artificial Intelligence Act (TFAIA) contains a similar whistleblower provision. Connecticut’s provision is stand-alone and is not paired with a broad transparency law applicable to frontier model developers, as in TFAIA or New York’s Responsible AI Safety and Education (RAISE) Act. This part of SB5 is thus a mirror image of the RAISE Act, which originally included – and then removed – a whistleblower provision.
Other provisions
One section of SB5 establishes a state AI regulatory sandbox program, which aligns Connecticut with such programs enabled by the Texas Responsible Artificial Intelligence Governance Act and the Utah AI Policy Act.
Other sections of Connecticut’s new law address state-specific programs on topics such as educational and workforce training opportunities.
Conclusion
Connecticut has now emerged alongside California and certain other states as a jurisdiction willing to act on concerns about the uses of automated technology, even as the federal preemption debate continues. With a large number of AI-related bills pending in many statehouses, more regulation is expected on the topics addressed in SB5.
Given the accelerating pace of state legislative developments, companies are encouraged to continue building AI governance and compliance programs that can efficiently incorporate new legal requirements as they take effect across the country.
Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy professionals helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. We continuously monitor AI-related legal developments and their impact on industries around the world.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
For further information or if you have any questions, please contact any of the authors.


