15 September 2025

AI-related securities class action filings are on the rise: Key observations

In the last five years, more than 50 securities class action lawsuits have been filed alleging that defendants made false or misleading statements related to artificial intelligence (AI).

The number of AI-related cases filed each year is trending upwards. While there were 7 securities class action cases related to AI disclosures filed in 2023, that number doubled to 14 in 2024, and 12 have been filed so far in 2025.

To a large extent, these cases follow familiar plaintiff-side playbooks, focusing on risk disclosures or the alleged reasons for disappointing financial results, or appearing to be attacks on business strategy disguised as claims under the federal securities laws. Several of the cases followed short-seller reports, another current trend. While some complaints focus on alleged misrepresentations regarding the technology itself, many allege that AI was not driving revenue or market demand as claimed. Plaintiffs appear to be capitalizing on the pressure companies face to demonstrate market leadership or innovation in the AI space in order to assert securities class action claims.

Given this environment, companies should be aware of the types of disclosures that may lead to increased litigation risk. Below are trends we are seeing in the ways plaintiffs are challenging AI-related disclosures.

These observations are based on complaints filed since the beginning of 2024 alleging violations of the federal securities laws and including some reference to AI, machine learning, or related technologies.

1. Alleged exaggeration of AI capabilities ("AI washing")

A recurring theme in securities class action cases involving AI disclosures is that companies allegedly overstated the sophistication, effectiveness, or uniqueness of their AI, machine learning, or related technologies. In some cases, plaintiffs allege that companies concealed reliance on manual labor, third-party tools, or non-AI solutions while marketing offerings as AI-driven, a practice often dubbed “AI washing.”

For example, plaintiffs in an action against Innodata allege that the company claimed to have advanced AI platforms and capabilities, but in reality, its operations allegedly relied heavily on offshore manual labor and its AI technology was rudimentary.

Similarly, Oddity Tech allegedly touted proprietary AI-driven product matching, but its technology allegedly was a basic questionnaire, and not “true” AI. Plaintiffs allege that Evolv Technologies claimed its AI technology could “reliably detect” weapons despite allegedly failing to detect certain knives, bomb components, and all micro-compact pistols. A complaint against Skyworks Solutions alleges that Skyworks exaggerated its ability to leverage AI within the iPhone upgrade cycle.

2. Allegedly misleading statements about AI-driven revenue or market demand

Several complaints allege that companies attributed their growth, customer retention, or competitive advantage to their AI or machine learning technologies when the actual business drivers were traditional marketing, aggressive sales tactics, or unrelated factors.

Complaints against certain companies, for example, allege weak demand for AI features and that the shift to AI offerings cannibalized higher-margin business lines.

Similarly, plaintiffs allege that Tempus AI touted itself as an AI healthcare company but had little history of generating significant revenues from AI technologies, and generated most of its revenues from acquisitions, genomic testing, and data licensing agreements.

3. Alleged concealment of AI limitations and challenges

Plaintiffs also complain that defendants allegedly failed to disclose limitations, technical challenges, or delays in developing or deploying AI solutions. For example, plaintiffs have alleged that companies did not reveal execution challenges and delays in rolling out AI platforms, or concealed the technical and competitive limitations of their AI platforms.

4. Allegedly false claims of third-party validation or independence

A few complaints contend that companies falsely claimed independent, third-party validation of their AI technologies. Evolv Technologies, for example, disclosed that its AI-based weapons detection was validated by independent testing, but plaintiffs allege the company manipulated test results and was heavily involved in the reporting process. Other complaints allege that companies omitted any meaningful discussion of how their AI was tested or benchmarked, allegedly leaving investors and customers with an inflated sense of reliability.

5. Alleged concealment of increased costs or negative financial impact from AI initiatives

Perhaps not surprisingly given inherent uncertainties in new technology development, some complaints allege that companies downplayed or failed to disclose the significant costs, margin pressures, or negative financial impacts associated with developing, maintaining, or integrating AI technologies. Xiao-I, for example, allegedly understated the negative impact of increased R&D expenses required to compete in the AI industry.

6. Alleged failure to disclose material risks and known trends related to AI

Plaintiffs also have seized on allegedly “boilerplate” risk disclosures regarding the technical, legal, or financial risks associated with companies' AI initiatives. As is common in securities class action complaints, plaintiffs also allege that companies failed to update risk disclosures, couching risks as hypothetical when they had already come to pass.

7. Allegedly misleading statements about AI research, development, and investment

Some complaints allege that companies misrepresented the extent of their investment in AI research and development, the size and expertise of their AI teams, or the progress of their AI projects. For example, plaintiffs allege that Innodata did not have the resources to develop AI, as there allegedly were only eight employees at the company and the company allegedly had not increased spending on research and development of AI technology.

Takeaways

Companies across sectors and industries are incorporating AI into their business solutions and product offerings. The threat of lawsuits, including private securities class actions, need not hamper innovation or deter companies from communicating with investors. Indeed, the mere fact that a complaint is filed does not mean it will survive a pleading-stage challenge.

Working with cross-functional teams that understand the technology, the business, potential liability, and litigation trends can help companies mitigate risks related to AI-related disclosures in their public statements.

For more information, please contact the authors.