
8 August 2025 • 3 minute read
State action targets use of biased AI underwriting models: Key points
On July 10, 2025, the Massachusetts Attorney General announced a $2.5 million settlement with Earnest Operations, a student loan company, resolving allegations that the company failed to mitigate the risk of disparate harm to Black, Hispanic, and non-citizen applicants and borrowers arising from its use of artificial intelligence (AI) underwriting models.
Earnest had used these algorithmic models to make lending decisions, including determinations of eligibility, loan terms, and pricing. The Attorney General alleged that the company's failures violated consumer protection and fair lending laws.
The Assurance of Discontinuance, used in place of a court order, requires the company to develop and maintain a detailed governance structure for covered AI models, setting out specific requirements regarding written policies, risk assessments, testing, inventories, documentation, and an oversight team.
State and federal enforcement
The settlement is one of the first state enforcement actions involving a company's use of algorithmic tools with allegedly discriminatory impact. It is likely not the last. State enforcers have been concerned about biased lending outcomes for some time, and the use of AI tools has heightened those concerns. Attorneys General in California, Oregon, and New Jersey have all issued guidance addressing AI-related bias and discrimination, including in credit and loan decisions.
Recent cases, including Earnest, may signal an increase in state enforcement surrounding the sale or use of AI tools, a topic explored at a recent DLA Piper event. In June, the DC Attorney General settled a case against a property management company over an alleged conspiracy to inflate rents using algorithmic pricing. Last year, the Texas Attorney General settled with a healthcare technology company, resolving allegations of deceptive claims about the accuracy of its healthcare AI products.
However, federal agency action addressing discriminatory outcomes from corporate use of AI products may be less likely. In a 2024 case involving an auto dealer, Andrew Ferguson, then a commissioner and now Chairman of the Federal Trade Commission (FTC), stated that he did not view discrimination claims as falling under the FTC Act. In April 2025, President Donald Trump issued an Executive Order declaring it federal policy to “eliminate the use of disparate-impact liability” and directing agencies to “deprioritize enforcement” accordingly.
Key takeaways for companies
Despite the federal landscape, companies using AI or algorithmic tools to make consequential decisions affecting consumers are encouraged to assess potential litigation and enforcement risk. AI-specific international and state laws may apply, and private suits or class actions remain possible. In cases like Earnest, state enforcers have further demonstrated a willingness to use traditional authorities to investigate these matters.
The Earnest case, and specifically the AI governance structure the company is required to implement, may be informative for companies seeking to reduce their own exposure. The features of that structure, including written policies, risk assessments, testing, inventories, documentation, and oversight, are often key elements of effective AI governance programs aligned with emerging regulatory regimes and industry standards.
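For the testing element specifically, disparate impact analyses often compare outcome rates across demographic groups. The sketch below is a minimal, hypothetical illustration of one such check, using the adverse impact ratio (the so-called four-fifths rule); the metric choice, column names, group labels, threshold, and toy data are assumptions for demonstration only and are not drawn from the settlement or the Assurance of Discontinuance.

```python
# Purely illustrative sketch of a disparate impact check on model decisions,
# using the adverse impact ratio ("four-fifths rule"). All names, thresholds,
# and data here are hypothetical and for demonstration only.
import pandas as pd

def adverse_impact_ratios(decisions: pd.DataFrame,
                          group_col: str = "group",
                          approved_col: str = "approved",
                          reference_group: str = "reference") -> pd.Series:
    """Approval rate of each group divided by the reference group's rate."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return rates / rates[reference_group]

# Toy data: 1 = approved, 0 = denied
toy = pd.DataFrame({
    "group":    ["reference"] * 4 + ["group_a"] * 4,
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

ratios = adverse_impact_ratios(toy)
flagged = ratios[ratios < 0.80]  # below the conventional four-fifths threshold
print(ratios)
print("Groups warranting further review:", list(flagged.index))
```

In practice, quantitative checks of this kind are typically paired with the written policies, documentation, and human oversight described above rather than used in isolation.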
For more information, please contact the authors.