26 December 2025

FTC issues consumer review warning letters and vacates AI consent order: Key points

On December 22, 2025, the Federal Trade Commission (FTC) announced two actions relating to fake or false consumer reviews:

  1. Issuance of a set of warning letters to companies that may be responsible for fake or false reviews
  2. An order vacating its earlier settlement with an artificial intelligence (AI) company and concluding that the company is not responsible for any such reviews

These two matters, along with other recent activity, provide insights into how the FTC may use its enforcement resources regarding consumer reviews and AI.

1. FTC warning letters on consumer reviews

The FTC issued ten warning letters to companies regarding possible violations of the Consumer Review Rule, which took effect in 2024. The Consumer Review Rule prohibits certain deceptive or unfair conduct related to the use of reviews and testimonials for goods and services. To date, the FTC has alleged a Consumer Review Rule violation in only one case, Growth Cave, which focused on false earnings claims and included allegations that the company disseminated testimonials from individuals who did not disclose their employment relationship with the company.

The FTC’s press release and related materials include a template warning letter but do not disclose the recipients’ identities or detail the alleged conduct. The template does, however, provide an example of a violation: “providing compensation to your employees in exchange for the employee obtaining 5-star reviews from friends and family, and obtaining reviews from individuals who did not have experience with the company’s products or services.” The template also references distribution within a “multi-office practice group,” suggesting that at least one recipient may be a professional services organization, such as a law firm or medical office.

The warning letters are notable because they indicate that the FTC intends to use its limited resources to enforce the Consumer Review Rule affirmatively, not simply when potential violations happen to surface in an investigation into unrelated conduct.

2. The Rytr case and the FTC’s vacated consent order

In an exceptionally rare move, the FTC also reopened a settled matter and vacated a consent order on its own initiative. The case involved Rytr LLC, which was part of a 2024 enforcement action known as Operation AI Comply. Rytr allegedly marketed several AI “writing assistant” tools to businesses, with one tool designed to generate consumer reviews.

The FTC’s complaint alleged that this tool generated detailed reviews containing material details unrelated to the user’s input, and that users who then posted that content online would be disseminating reviews that were certainly false. The complaint gave several examples of user activity, including one user who “generated over 83,000 reviews for various specific packing and moving services.” However, the complaint did not allege that any Rytr-generated reviews actually appeared online. Then-Commissioner (and now Chairman) Andrew N. Ferguson and former Commissioner Melissa Holyoak dissented from the original decision, arguing that the allegations did not amount to deception or unfairness under the FTC Act and that the case imposed burdens on AI innovation.

Why did the FTC revisit the order?

The recent order cites the White House’s July 2025 AI Action Plan, which directed the FTC to “review all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set aside any that unduly burden AI innovation.” Given the prior dissents, the Rytr case was a candidate for review.

The December 22 order also explains that the FTC “may vacate a prior order on its own initiative when it determines the order is contrary to the public interest.” Because Chairman Ferguson and Commissioner Mark Meador found no violation and concluded that the 2024 order burdens AI innovation, the new order determines that setting aside the 2024 order serves the public interest. The new order also details why, in their view, the alleged facts do not violate the FTC Act, including interpretations of the limits of liability for providing the “means and instrumentalities” to deceive others. Although this action is notable, similar outcomes may be unlikely in other AI-related FTC cases: Rytr is the only AI-related case Chairman Ferguson has voted against while at the Commission. The FTC nonetheless remains active in matters involving AI marketing claims and children’s interactions with companion chatbots.

At a recent conference, Chairman Ferguson stated that the agency has “found that, not infrequently, the representations those [AI] companies are making are wildly inaccurate.” Discussing the importance of allowing for AI innovation, he gave the caveat that “we also don’t want to create a system where the tradeoff for innovation is children.” He also contrasted the promise of AI in healthcare and defense with the “incredible amount of this capital…being invested in these chatbots” that “primarily are another mechanism for hoovering up data.”

Further, as described in another DLA Piper alert, the recent Executive Order calling for a national AI framework directs the FTC to issue a policy statement on how the FTC Act’s prohibition on unfair or deceptive acts or practices applies to AI models. That policy statement may provide further clarity on how the FTC views its AI-related authority and enforcement priorities.

Key takeaways

While the two actions announced on December 22 are unrelated, both address matters involving consumer reviews. The FTC’s ongoing focus on the integrity of the review ecosystem is also reflected in its recent finalization of an earlier order against NextMed. As explored in an earlier DLA Piper alert, that case involves certain review practices that had not been addressed in previous cases. Companies are encouraged to closely monitor any new FTC activity regarding reviews and, of course, the sometimes connected and increasingly inescapable topic of AI.

For more information, please contact the authors.
