Data privacy bill in Congress would create federal enforcement over algorithms

New provisions would govern design evaluations of algorithms and their impact on civil rights


AI Outlook

Legislation now being considered by the US Congress to create a comprehensive national data privacy and security framework includes a section that would take a significant step toward a federal enforcement mechanism over how businesses design and employ algorithms, and the underlying data used to support them.

The American Data Privacy and Protection Act (ADPPA – HR 8152), introduced in the House of Representatives on June 21, would establish national standards on what types of data companies can gather from individuals and how that information is used. The pending bill has support among a number of key members of Congress, and while it is challenging to predict what legislation may or may not become law, a consistent theme emerging from both sides of the aisle is policymakers’ increased attention to algorithms and the growing role they play in our lives.

As discussed in this June 2 DLA Piper Data Protection, Privacy and Security Alert, when the emerging bipartisan compromise proposal was still in a draft form, the pending legislation would largely preempt most state privacy laws and would include a limited private right of action for individuals.

Enforcement of the Act would be under the jurisdiction of the Federal Trade Commission (FTC), an agency that has taken an aggressive posture toward the tech industry during the Biden Administration. In addition to enlisting some of the existing sub-agencies in the FTC’s enforcement arsenal, the legislation would create “a new bureau comparable in structure, size, organization, and authority to the existing Bureaus within the Commission related to consumer protection and competition” to carry out the new requirements.

Algorithms and civil rights

The ADPPA includes a provision, Section 207, Civil Rights and Algorithms, under which covered entities may not collect, process, or transfer covered data in a manner that discriminates against individuals on the basis of race, color, religion, national origin, sex, or disability.

Large data holders – defined as those that have annual revenues of at least $250 million and collect covered data on more than 5 million individuals (or sensitive data on more than 200,000 individuals) – would be required to perform an annual “algorithm impact assessment.”

The scope of this impact assessment would have to include:

• a detailed description of the design process and methodologies of the algorithm;
• a statement of the purpose, proposed uses, and foreseeable capabilities outside of the articulated proposed uses of the algorithm;
• a detailed description of the data used by the algorithm, including the specific categories of data that will be processed as input and any data used to train the model that the algorithm relies on;
• a description of the outputs produced by the algorithm; and
• an assessment of the necessity and proportionality of the algorithm in relation to its stated purpose, including the reasons for the superiority of the algorithm over nonautomated decision-making methods.
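To make the required contents concrete, the elements above could be organized as a single structured record that a compliance team fills in for each algorithm. The sketch below is purely illustrative; the class and field names are hypothetical and are not drawn from the bill’s text.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    """Illustrative record of the elements an ADPPA Section 207 impact
    assessment would have to document. Field names are hypothetical."""
    design_process: str                 # design process and methodologies
    stated_purpose: str                 # purpose and proposed uses
    # foreseeable capabilities outside the articulated proposed uses
    foreseeable_other_uses: list[str] = field(default_factory=list)
    # specific categories of data processed as input
    input_data_categories: list[str] = field(default_factory=list)
    # data used to train the model the algorithm relies on
    training_data_sources: list[str] = field(default_factory=list)
    outputs_description: str = ""       # outputs produced by the algorithm
    # necessity and proportionality vs. nonautomated decision-making
    necessity_and_proportionality: str = ""
```

A record like this would be completed once per algorithm and refreshed for the annual assessment cycle the bill contemplates.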

Finally, the impact assessment must document in detail the steps taken to mitigate potential harms from a large data holder’s algorithms in several areas: harms to children under the age of 17; harms related to making or facilitating advertisements for, or determining access to or restrictions on, housing, education, employment, healthcare, insurance, or credit opportunities; harms related to access to public accommodations; and disparate impacts on the basis of race, color, religion, national origin, sex, or disability status.

Large data holders would have to produce their first impact assessment within two years of enactment of the legislation, and thereafter on an annual basis.

All covered entities and service providers also would be required to conduct an “algorithm design evaluation” to reduce the risk of harm. A “covered entity or service provider that knowingly develops an algorithm, solely or in part, to collect, process or transfer covered data or publicly available information shall prior to deploying the algorithm in interstate commerce evaluate the design, structure, and inputs of the algorithm, including any training data used to develop the algorithm, to reduce the risk of the potential harms” identified in the “algorithm impact assessment” description above.

The deadline for providing the design evaluation would be two years from enactment into law.

The legislative text also calls on covered entities to use the services of an “external, independent auditor or researcher” to conduct an impact assessment or a design evaluation. Although the bill language does not spell out the qualifications for conducting an external audit, that kind of work is typically performed by law firms, accounting firms, or other professional service providers with the expertise and standing to conduct an impartial review.

Assessments and evaluations would have to be submitted to the FTC no later than 30 days after completion and, upon request, would have to be made available to Congress and “publicly available in a place that is easily accessible to consumers.” Trade secrets may be redacted and withheld from public disclosure.

Guidance on compliance with the new requirements would have to be published within two years of enactment into law by the FTC, in consultation with the Department of Commerce.

Who and what is “covered”?

“Covered entity” is defined as “any entity or any person, other than an individual acting in a non-commercial context, that alone or jointly with others determines the purposes and means of collecting, processing, or transferring covered data” and that is subject to the Federal Trade Commission Act, is a common carrier subject to the Communications Act of 1934 (and subsequent revisions), or is a nonprofit organization. Entities under common control with, or sharing common branding with, another covered entity are also covered.

Government agencies at the federal, state and local levels, and persons engaged in data collection and processing on behalf of a government, are exempt from the law.

“Covered data” refers to “information that identifies or is linked or reasonably linkable, alone or in combination with other information, to an individual or a device that identifies or is linked or reasonably linkable to an individual, and may include derived data and unique identifiers.”

And “algorithm” is defined as “a computational process, including one derived from machine learning or artificial intelligence techniques, that makes or facilitates a decision or facilitates human decision-making with respect to covered data, including to determine the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.”

Legislative prospects

Given the general atmosphere of partisan polarization on Capitol Hill these days, the pending data privacy bill is a relatively rare example of bipartisan consensus on a major issue. The bill has the support of some leading members of key committees with jurisdiction over tech issues in the House and Senate from both parties.

The ADPPA passed by a unanimous voice vote in a June 23 markup in the House Energy and Commerce Committee, Subcommittee on Consumer Protection and Commerce. The bill is co-sponsored on a bipartisan basis by Reps. Frank Pallone, Jr. (D-NJ) and Cathy McMorris Rodgers (R-WA), respectively chair and ranking member of the Energy and Commerce Committee, and by Reps. Jan Schakowsky (D-IL) and Gus Bilirakis (R-FL), chair and ranking member of the Consumer Protection Subcommittee.

On the Senate side, the ADPPA has the backing of Sen. Roger Wicker (R-MS), ranking member of the Senate Commerce, Science, and Transportation Committee.

But one prominent name that is not on the list of Congressional supporters is Senator Maria Cantwell (D-WA), chair of the Senate Commerce Committee, whose support is considered pivotal to a final agreement. Senator Cantwell has reportedly circulated an updated version of competing legislation she has previously sponsored, known as the Consumer Online Privacy Rights Act, to industry and privacy advocacy groups for feedback and input.

With so many potential regulations and laws on the horizon, it is challenging to predict which will come to pass. But a constant theme across these efforts, from both sides of the aisle, is the increased attention to algorithms and the growing role they play in our lives. “Algorithm impact assessments” and “algorithm design evaluations” have become commonplace technology vernacular in Washington, DC. Other common themes reflected in the proposed legislation include preventing algorithmic bias and discrimination based on protected characteristics, as well as placing regulatory focus on algorithms that control or restrict access to vital goods and services such as lending and healthcare. Companies adopting AI and other algorithm-based technologies would be well advised to expect some form of regulation, possibly through the FTC and other agencies, and to design in advance toward best practices of representative data, controls against bias, and industry-standard testing, validation, and total product lifecycle monitoring.
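As one concrete illustration of the kind of bias control companies might adopt, a common screening heuristic compares selection rates across demographic groups, flagging any group whose rate falls below four-fifths of the most favored group’s rate. The sketch below is a minimal, hypothetical example of that heuristic; it is not a test prescribed by the ADPPA, and the function name and threshold are the author’s assumptions based on standard practice.

```python
from collections import defaultdict

def disparate_impact_ratios(selected, groups):
    """For each group, return the ratio of its selection rate to the
    highest group's selection rate. `selected` is a sequence of 0/1
    outcomes; `groups` gives the group label for each record."""
    totals, hits = defaultdict(int), defaultdict(int)
    for outcome, group in zip(selected, groups):
        totals[group] += 1
        hits[group] += outcome
    rates = {g: hits[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Groups whose ratio falls below 0.8 would warrant further review
# under the common "four-fifths" screening heuristic.
```

A check like this would be one small piece of the broader testing, validation, and lifecycle-monitoring practices described above, not a substitute for them.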

DLA Piper will continue to monitor these developments on the Hill, in the common law, and across industries to guide its clients forward.

Learn more by contacting any of the authors, and visit our Artificial Intelligence Practice hub.