
15 September 2025 • 10 minute read
AI companion bots: Top points from recent FTC and government actions
In a recent client alert, we explored the legislative and enforcement outlook for chatbots designed or marketed for mental health or companionship. Among other things, we noted that Commissioner Melissa Holyoak of the Federal Trade Commission (FTC) had called for the agency to conduct a market study of companion bots to understand how they may impact children’s mental health. Since that alert was published, federal and state government interest – and media attention – has increased further.
On September 11, the FTC issued “Section 6(b) orders” to seven companies that operate consumer-facing, generative AI “companion” chatbots, seeking detailed information on product advertising, safety practices, monetization, usage and engagement, character design and approval, testing and monitoring for negative impacts, age-based access restrictions, complaint handling, and compliance with company rules and terms. The Commission voted 3–0 to issue the orders.
In this client alert, we explore this new action and note other recent state developments.
What is a Section 6(b) market study?
The Commission uses Section 6(b) of the FTC Act to conduct market studies in which agency staff engage in in-depth research on a commercial topic of interest in the marketplace. This provision allows the agency to issue a set of “6(b) orders,” which are official demands for recipients to provide specified documents and information. For a given market study, the agency usually limits recipients to a relatively small group, often between five and ten firms.
These orders can be extensive and are both enforceable and contestable. As with the agency’s civil investigative demands, the FTC can go to federal court if a recipient fails to comply, and recipients can file a petition with the FTC to limit or quash the order. The 6(b) process is thus more formal than a voluntary Request for Information.
Notably, a 6(b) study is not a law enforcement investigation, and the information that recipients provide the agency is not used for law enforcement purposes. However, as we noted in our prior client alert, these studies can inform the direction of later enforcement actions for violations of the FTC Act’s prohibition on deceptive or unfair conduct. They could also spur further legislative and regulatory action.
The outcome of the Section 6(b) process is usually a staff report, often with recommendations to policymakers and companies. Recent examples include the staff reports issued pursuant to 6(b) studies of large AI partnerships and investments, and of the data practices of large social media and video streaming companies. The latter report included a description of the companies’ use of AI and data analytics.
In practice, it can take considerable time – often measured in years – for agency staff to obtain and review the requested information, develop a report, and get FTC approval to release the report to the public.
What is the FTC seeking?
The FTC’s inquiry has a broad scope and seeks detailed information across several substantive areas related to the development and operation of AI companion chatbots. It requires companies to provide granular data on product features, user engagement, and internal governance. Specifically, the agency is requesting:
- Product features and functionality: Descriptions of chatbot capabilities, how users interact with AI companions, and the range of available characters or personas, including how these are designed, categorized, and approved.
- Advertising and marketing: Information on how chatbots are promoted, the claims made about their capabilities, and the disclosures provided to users regarding intended use, limitations, and potential risks.
- Monetization and engagement: Details on how companies monetize user engagement, including subscription models, in-app purchases, and advertising, as well as strategies and features designed to increase user engagement, session frequency, and duration (especially among children and teens).
- Age-based access and restrictions: Explanations of how age restrictions are implemented, monitored, and enforced, including any age-gating, verification, or parental controls, and how content or features are tailored for different age groups.
- Safety testing and monitoring: Descriptions of pre-deployment and ongoing testing, monitoring, and mitigation measures to identify and address potential negative impacts, such as exposure to inappropriate content or excessive use, with a focus on protections for minors.
- Character and content moderation: Processes for developing, reviewing, and approving AI companion characters, including how sexually themed or otherwise sensitive content is managed, and how the company responds to emerging risks or complaints about specific characters or interactions.
- Complaint handling and user feedback: Procedures for receiving, categorizing, and responding to user complaints or reports of harm, including escalation protocols and how feedback is used to improve safety and compliance.
The FTC’s requests also cover both pre-deployment and post-deployment practices, with a particular focus on how these products may affect children and teens.
Why does it matter?
The FTC’s inquiry signals a significant increase in regulatory scrutiny of AI companion chatbots, especially those that interact with children and teens. By seeking detailed information on how these products are developed, marketed, and monitored for safety, the FTC is laying the groundwork for potential future policy or enforcement actions. Companies in this space should be aware that the agency is particularly concerned about the risks of inappropriate content, excessive engagement, and the adequacy of age-based restrictions. As mentioned above, the FTC’s findings may inform future initiatives, as FTC Commissioner Mark Meador made clear in his accompanying statement:
The study the Commission authorizes today, while not undertaken in service of a specific law enforcement purpose, will help the Commission better understand the fast-moving technological environment surrounding chatbots and inform policymakers confronting similar challenges. The need for such understanding will only grow with time. For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws. Undertaking this study is therefore essential. If the facts—as developed through subsequent and appropriately targeted law enforcement inquiries, if warranted—indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us.
Other recent regulatory updates
As explored in our earlier client alert, the FTC’s inquiry is part of a broader trend of increasing regulatory scrutiny of AI companion chatbots. In addition to the Commission’s action, states are continuing to address the risks of these products, particularly for children and vulnerable users.
California SB 243
On the same day as the FTC’s announcement, the California legislature passed SB 243 with bipartisan support and sent the bill to Governor Gavin Newsom for approval. If signed, the bill would make California the second state (after New York) to regulate companion bots, and the first state to require their operators to implement specific safety protocols for AI companions, with the law taking effect January 1, 2026.
SB 243 targets AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs. The bill would require platforms to implement protocols to prevent chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content with users, especially minors. It also mandates recurring alerts to users – every three hours for minors – reminding them that they are interacting with an AI, not a real person, and encouraging them to take breaks. Additional requirements include annual reporting to the Office of Suicide Prevention, transparency around safety protocols, and regular third-party audits. The bill would also allow individuals harmed by violations to seek injunctive relief and damages.
Other regulatory actions on chatbot interactions
State enforcers have also been active in recent weeks. For example, on August 25, via the National Association of Attorneys General, 44 attorneys general sent a strongly worded letter to 13 platforms and AI companies, expressing concern about chatbot interactions with children and telling the recipients that they would “be held accountable for their decisions.”
Combined with the recently enacted state laws in Illinois, Nevada, and Utah involving AI and mental health, all discussed in our earlier client alert, this activity reflects growing concern among lawmakers and regulators about the health and safety risks posed by AI companions. High-profile private lawsuits are being filed as well. The result is a rapidly evolving and increasingly complex regulatory and litigation environment for companies operating in this space, with heightened expectations for transparency, safety, and risk mitigation.
Key takeaways
Companies developing or operating AI companion chatbots are encouraged to assess their compliance posture in light of the FTC’s inquiry. This may include reviewing product features, advertising and engagement strategies, and internal governance practices.
Companies may also wish to prepare to provide detailed documentation and data to regulators, and to demonstrate robust processes for monitoring and mitigating potential risks to minors. The FTC’s focus on this sector underscores the importance of proactive risk management and transparency as regulatory expectations continue to evolve.
Read more
To learn more about our work testing and red teaming generative AI chatbots for safety and other legal risks, see:
- Legal red teaming, one year in: Reports from the field
- Addressing legal risks in GenAI: The importance of legal red teaming
- Legal red teaming: A systematic approach to assessing legal risk of generative AI models
To learn more about our work defending litigation and investigations over chatbots and AI, see:
- AI disputes webinar series
- AI Disputes Landscape Resources
- New Sheriff in Town: State AG enforcement on the use of AI
To read about the addition of Michael Atleson, a long-time AI thought leader at the FTC, to our AI legal team, see:
To learn more about our work on AI chatbot safety in medical and other regulated spaces, see:
Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy experts helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and its impact on industry across the world.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.