
21 January 2026

AI disclosure laws on commercial chatbot interactions are on the rise: Key takeaways for companies

A growing number of state laws require companies deploying chatbots for one-to-one consumer interactions to disclose that the consumers are not communicating with humans.

These laws vary, however, in (1) the commercial contexts in which they apply and (2) when and how a disclosure must be made.

In addition, a first-of-its-kind law in New York requires advertisers to disclose the use of “synthetic performers” in traditional or other static advertising.

With state legislatures considering a large number of artificial intelligence (AI)-related bills, this variable landscape of AI disclosure laws is set to become even more crowded and tougher to navigate over time.

Further, as these new state laws come into effect, the Federal Trade Commission (FTC) Act remains constant and will often require companies to disclose AI use in order to avoid consumer deception.

The federal standard is limited to the commercial context (e.g., the sale of goods or services) and would apply only if the chatbot’s presence is unexpected and material – that is, likely to affect a consumer's choice of or conduct regarding a product. State consumer protection laws barring deceptive commercial practices would apply in the same way. This baseline standard for disclosure will remain, regardless of whether AI-specific state laws apply or remain on the books in the wake of debates on federal preemption.

That said, existing AI-specific state disclosure laws encompass requirements that are both narrower and broader than the general deception standard. Some apply in more limited circumstances and/or apply without regard to consumer expectation or materiality.

A Venn diagram of these state laws and the general deception standard would be a tangle of overlapping circles. Except for the New York law, though, the state laws all appear to apply solely to one-to-one interactions between chatbots and consumers, rather than to static advertisements distributed via traditional channels.

State AI consumer disclosure laws

Below we set out some of the basics and variations of current state AI disclosure laws involving consumer interactions, roughly from broader to narrower coverage.

Maine

Any person using a bot or other computer technology to engage in trade or commerce with a consumer must make a clear and conspicuous disclosure that the consumer is not engaging with a human, if use of the bot may mislead consumers into believing they are engaging with a human. Note that plaintiffs are not required to prove that any consumers were actually misled or suffered any injuries as a result of a violation.

New Jersey

Any person using a bot to communicate or interact with a person in connection with the sale or advertising of merchandise or real estate must disclose the bot's use clearly and conspicuously and at the start of the interaction. The law thus applies only to sales or advertising, rather than all trade and commerce, and is limited to merchandise and real estate.

California

Any person using a bot to communicate or interact with another person in a commercial transaction with intent to mislead the other person about its artificial identity, for the purpose of knowingly deceiving the person about the content of the communication and incentivizing the transaction, must make a clear and conspicuous disclosure. This law is transaction-based, and thus narrower in scope than the Maine and New Jersey laws. It is also narrower because it requires not only a specific intent to mislead but also an intent to mislead for a particular purpose and result – facts that may often be hard to prove.

Colorado

Per the Colorado AI Act, a deployer of a high-risk AI system must disclose its use for any high-risk consumer interactions, unless it is obvious to a reasonable person that the interaction is indeed with an AI system. Such interactions are those involving “consequential decisions” regarding education, employment, finance, essential government services, healthcare, housing, insurance, or legal services. The law doesn’t specify when and how the disclosure must be made.

Utah

Utah has two relevant laws, each of which incorporates elements of the laws described above. First, a seller using generative AI to interact with individuals in consumer transactions must disclose the use of AI if the consumer asks, unless the seller has already disclosed its use clearly and conspicuously. Second, state-regulated professionals using generative AI in high-risk interactions with individuals receiving professional services must make a disclosure prominently and at the start of the interaction.

Noncommercial state AI disclosure laws

As discussed in prior DLA Piper client alerts, a few states have also imposed AI-related consumer disclosure requirements outside the context of goods and services. In particular, New York and California have laws requiring certain disclosures for companion chatbots, and Utah has a law requiring disclosures for mental health chatbots. These laws stem less from concerns about commercial deception and more from concerns about other potential harms arising from interactions with such services.

New York synthetic performer disclosure law

Finally, New York’s groundbreaking law, S.8420-A/A.8887-B, requires commercial advertisers to disclose conspicuously to consumers when a “synthetic performer” is used in a visual or audiovisual advertisement.

The definition of “synthetic performer” refers to “a digitally created asset created, reproduced, or modified by computer, using generative artificial intelligence or a software algorithm, that is intended to create the impression that the asset is engaging in an audiovisual and/or visual performance of a human performer who is not recognizable as any identifiable natural performer.”

The law specifically excludes “audio advertisements” and, while not explicitly excluded, does not appear to cover the use of real performers enhanced by AI tools. It also would not cover deepfakes of real performers the way that Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act does. Instead, the law appears limited to situations in which the advertiser intends viewers to believe that the digital asset is an actual human performer who is not identifiable as any particular individual (e.g., not a recognizable celebrity) performing in the advertisement.

Despite its limited application to certain types of ads, generated content, and advertiser intent, S.8420-A/A.8887-B is the only law in the United States that specifically requires a disclosure for the use of AI-generated content in advertisements.

Further, when the law does apply, it goes beyond the general deception standard in the FTC Act and state consumer protection laws. Under those general standards, the relevant enforcement agency would have to show that, via the use of the synthetic performer, the advertiser made a material misrepresentation likely to mislead reasonable consumers. The New York law dispenses with the need to make that showing. On the other hand, it adds an intent requirement that consumer protection enforcers typically do not have to meet when holding someone responsible for deceptive commercial conduct.

Takeaways

More state laws requiring AI-related disclosures are likely on the way. If past is prologue, new laws in this area will likely contain elements of, but not be identical to, those that other states have already passed. Meanwhile, the tug of war between state and federal AI regulation will continue, and state attorneys general may use broader consumer protection laws in AI-related advertising contexts.

Especially under these state-specific and fact-dependent circumstances, and until the dust settles, advertisers are encouraged to err on the side of transparency, letting consumers know when they’re talking to a chatbot or seeing a human-like avatar in an advertisement. Advertisers are also encouraged to ensure that consumers will see and understand any such disclosure.
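For teams building chatbot experiences, one practical way to operationalize this guidance is to surface the disclosure programmatically at the start of every session and again whenever a consumer asks whether they are talking to a bot. The sketch below is a minimal, hypothetical illustration in Python; the function names, keyword list, and disclosure wording are our own assumptions rather than statutory language, and whether any given disclosure is sufficiently "clear and conspicuous" or "prominent" under a particular state law is a separate legal question.

# Minimal, hypothetical sketch of surfacing an AI disclosure in a consumer chatbot.
# Function names and wording are illustrative assumptions, not statutory text;
# the required timing, prominence, and content vary by state law.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "You can ask to speak with a person at any time."
)

# Phrases suggesting the consumer is asking whether they are talking to a bot
# (relevant, for example, to disclosure-on-request style requirements).
BOT_QUESTION_KEYWORDS = (
    "are you a bot",
    "are you human",
    "are you an ai",
    "is this a real person",
)


def start_session() -> list[str]:
    """Open a new chat session with the disclosure shown first, before any other message."""
    return [AI_DISCLOSURE]


def respond(user_message: str, generate_reply) -> str:
    """Answer a user message, restating the disclosure if the user asks whether they're talking to AI."""
    normalized = user_message.lower()
    if any(keyword in normalized for keyword in BOT_QUESTION_KEYWORDS):
        return AI_DISCLOSURE
    return generate_reply(user_message)


if __name__ == "__main__":
    transcript = start_session()
    transcript.append(respond("Are you a bot?", generate_reply=lambda m: "..."))
    print("\n".join(transcript))

The code only illustrates where a disclosure might be inserted in the conversation flow; how it is displayed (placement, size, persistence) will often matter as much as the words themselves.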

Even when the bot-or-not question is not at issue, advertisers should remain vigilant about whether other uses of AI, such as when it is used to depict product use and results, may deceive consumers.

For more information, please contact the authors. 
