12 January 2024 | 7 minute read

Ensuring ethical AI: Key insights and compliance strategies from the Rite Aid FTC settlement

In December 2023, Commissioner Alvaro M. Bedoya of the Federal Trade Commission (FTC) released a statement on a recent enforcement action brought against Rite Aid Corporation relating to the company’s use of facial recognition technology, which allegedly produced biased and inaccurate results in contravention of the FTC Act.[1]

Commissioner Bedoya’s statement addresses Rite Aid’s alleged wrongful use of facial recognition technology, the Settlement Order setting out the agreed enforcement terms, and guidance from the FTC encouraging companies to adopt comprehensive testing and monitoring regimes to avoid biased and inaccurate results from algorithmic decision-making systems.

Issued as part of the enforcement action settlement, both the Commissioner’s statement and the Settlement Order provide valuable regulatory guidance for AI facial recognition technology and for any algorithmic decision-making technology capable of making decisions that may affect people’s lives.

Understanding the Rite Aid decision: A commissioner’s interpretive lens

Rite Aid enforcement action

On the same day the statement was released, the FTC filed an enforcement action against Rite Aid Corporation in federal district court, challenging the company’s deployment of facial recognition technology. The technology was used to identify individuals suspected of prior theft or other criminal activity in Rite Aid stores. The FTC’s enforcement action contends that the system was inherently flawed, generating unacceptable levels of error, including “produc[ing] thousands of incorrect matches.”[2] This led to several customers being wrongly searched and baselessly accused of criminal conduct.

The enforcement action further suggests a problematic pattern in which Rite Aid disproportionately implemented this technology in stores situated within predominantly non-White communities – a significant concern given the documented lower accuracy of facial recognition technology for people of color.[3]

In its formal complaint, the FTC highlighted Rite Aid’s neglect of crucial safeguards that could have mitigated consumer harm prior to the deployment of its facial recognition technology. The FTC emphasized the absence of critical preparatory steps, including:[4]

  • Conducting thorough risk assessments to evaluate the potential for consumer misidentification, particularly concerning racial or gender-based disparities

  • Rigorously verifying the technology’s precision through comprehensive testing and documentation processes before its operational use

  • Implementing robust procedures to ascertain and maintain the high quality of data that informs and trains the algorithmic decision-making framework, and

  • Establishing ongoing oversight protocols to continuously monitor the technology’s performance, with a focus on detecting and addressing any emergent bias or inaccuracies.

Under the terms of the Settlement Order, Rite Aid must cease using facial recognition technology for surveillance purposes for five years, during which time it is also required to expunge any biometric data gathered in association with the previously deployed system.[5] Should Rite Aid opt to reinstate a facial recognition system after this period, the company is mandated to adhere to a series of stipulations prescribed by the Settlement Order, including:[6]

  • Proactively informing individuals of their inclusion in the facial recognition database and providing clear mechanisms for challenging their inclusion

  • Promptly notifying consumers of any adverse decisions influenced by the system and outlining procedures to contest such decisions

  • Conducting rigorous testing of the system to identify and rectify any statistically significant biases based on race, ethnicity, gender, sex, age, or disability, either individually or in concert

  • Executing annual testing under conditions that closely mirror the operational environment of the system to ensure sustained accuracy and fairness, and

  • Decommissioning the system if Rite Aid is unable to effectively mitigate the risks uncovered during assessments and testing processes.

Furthermore, the Settlement Order establishes provisions for the FTC “to conduct ongoing compliance monitoring.”[7]

Broader applicability

Commissioner Bedoya’s statements concerning the enforcement action underscore that the stipulations of the settlement and the proposed Settlement Order serve as “a baseline for what a comprehensive algorithmic fairness program should look like.”[8] Commissioner Bedoya interprets Section 5 of the FTC Act as obligating “companies using technology to automate important decisions about people’s lives . . . to take reasonable measures to identify and prevent foreseeable harms.”[9] This obligation is not confined to facial recognition and extends across all forms of automated decision-making technologies.[10]

Notably, the Commissioner recognizes the potential of such technologies to perpetuate historical injustices and acknowledges the potentially insidious nature of algorithmic bias. Consequently, the FTC is proactively addressing incidents of algorithmic discrimination.[11]

Key takeaways

Commissioner Bedoya’s insights serve as a directive for businesses employing automated decision-making systems that may significantly influence individuals’ lives – a directive that extends beyond the realm of surveillance technologies. To forestall potential FTC enforcement action, companies are advised to institute a robust algorithmic fairness program encompassing the following measures:

  • Transparent consumer notification: Establish a transparent notification process to inform consumers when information about them is incorporated into an automated system’s database, providing clear guidance on how they can challenge its inclusion. Businesses are advised to communicate with consumers regarding adverse determinations made by these systems and offer a straightforward dispute resolution process. Timely and effective response mechanisms to consumer challenges are imperative.

  • Rigorous and periodic system evaluation: Conduct systematic and frequent evaluations of the automated decision-making system for biases that may affect protected classes – such as race, ethnicity, gender, sex, age, or disability – whether individually or in combination. These evaluations may be most effective when conducted at least annually and under conditions that mimic the system’s operational environment. Identified risks should be promptly remedied, and, if a risk proves irremediable, continued deployment of the system may need to be reconsidered. An illustrative sketch of one such evaluation follows this list.
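
To make the evaluation expectation concrete, below is a minimal sketch in Python of one way a periodic bias test might be structured: comparing false-match rates across demographic groups and flagging statistically significant disparities. The group labels, counts, reference-group choice, and significance threshold are all hypothetical, and the two-proportion z-test is only one common statistical choice – nothing in this sketch is prescribed by the Settlement Order or the FTC.

from math import sqrt, erfc

# Hypothetical evaluation results gathered under operational-like conditions:
# group -> (number of false matches, number of match attempts)
results = {
    "group_a": (18, 12_000),
    "group_b": (41, 11_500),
    "group_c": (22, 12_300),
}

REFERENCE = "group_a"  # the choice of reference group is itself a design decision
ALPHA = 0.05           # significance threshold; a real program would justify this choice

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under a normal approximation
    return z, p_value

ref_x, ref_n = results[REFERENCE]
for group, (x, n) in results.items():
    if group == REFERENCE:
        continue
    z, p = two_proportion_z_test(x, n, ref_x, ref_n)
    flag = "INVESTIGATE" if p < ALPHA else "ok"
    print(f"{group}: false-match rate {x / n:.4%} vs reference {ref_x / ref_n:.4%} "
          f"(z={z:+.2f}, p={p:.4f}) -> {flag}")

In practice, a compliance program would pair a test of this kind with documentation of the evaluation conditions, remediation plans for any flagged group, and escalation toward decommissioning where a disparity cannot be mitigated.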

DLA Piper and AI system testing

DLA Piper’s team of lawyers and data scientists assists organizations in navigating the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements. We continuously monitor updates and developments in AI and its impact on industries across the world.

Next steps

As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper was conferred the Innovative Lawyers in Technology award for its AI and Data Analytics practice.

DLA Piper’s AI policy team in Washington, DC is led by the Founding Director of the Senate Artificial Intelligence Caucus.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI strategy through our newly released AI Chatroom series.

For further information or if you have any questions, please contact any of the authors.



[1] See Complaint, Fed. Trade Comm’n v. Rite Aid Corp., No. 2:23-cv-5023 (E.D. Pa. 2023), https://www.ftc.gov/system/files/ftc_gov/pdf/2023190_riteaid_complaint_filed.pdf [hereinafter Enforcement Action].

[2] Alvaro M. Bedoya, Fed. Trade Comm’n, Statement of Commissioner Alvaro M. Bedoya: On FTC v. Rite Aid Corporation & Rite Aid Headquarters Corporation 8 (2023) (emphasis added), https://www.ftc.gov/system/files/ftc_gov/pdf/2023190_commissioner_bedoya_riteaid_statement.pdf.

[3] Enforcement Action, supra note 1, at 12.

[4] Id. at 10.

[5] See Bedoya, supra note 2, at 3.

[6] Id. at 3-4.

[7] Id. at 4.

[8] Id.

[9] Id. at 3 (emphasis added).

[10] Id.

[11] Id. at 5.