28 June 2022 | 13 minute read

Data privacy bill in Congress would create federal enforcement over algorithms

New provision would govern design evaluation and impact of algorithms on civil rights

“Manipulation.” “Targeting.” “Persuasion.”

These are some of the terms frequently invoked in the nation’s capital when it comes to the growing use of automated decision-making by social media and other technology companies and its impact on the lives of millions of people in the US and globally.

Congressional concern over these issues has recently ramped up in the wake of the fall 2021 testimony by Facebook whistleblower Frances Haugen and the subsequent release of the “Facebook Papers,” a trove of documents revealing the company’s own internal research findings on the platform’s political and societal effects.

How platforms use algorithms to target users with specialized content is an issue receiving bipartisan attention in both chambers of Congress. Whether that uptick in consideration will translate into concrete statutory action remains to be seen, but the volume of new legislation and interest on the topic does increase the likelihood that some sort of legislative action will be taken this year.

One trend is emerging from recent hearings and the legislation piling up in the hopper: many elected officials (and their staffers) are becoming more familiar with how social media companies operate as a business model and increasingly understand the implications the technology is having on consumers.

In recent years, members of Congress sometimes seemed out of place when confronting the leaders of technology companies at high-profile hearings. The characterization was always somewhat unfair, as a cadre of lawmakers has for some years been brainstorming about the implications of AI and warning their colleagues and constituents about the need to create a regulatory regime – essentially from scratch. That corps of engaged and knowledgeable lawmakers is growing ever larger.

While it is still to be determined precisely what form the nascent regulatory regime takes, it is a safe bet that businesses will have to be prepared for new compliance responsibilities and transparency requirements.

Recurring legislative themes

Through 2021, more than 30 bills were introduced in the House and Senate with a prominent focus on the regulation of algorithms. And that doesn’t include countless additional references to algorithms, AI, machine learning and automated decision-making embedded in a wide range of appropriations and authorization bills, a reflection of the prevalence of these technologies in so many domains.

Two of the features that many of the 2021 legislative proposals have in common are:

Expanded role for the FTC. In the preponderance of proposals, the Federal Trade Commission is the agency charged with enforcing the provisions of the legislation and promulgating regulations, given its responsibilities over consumer protection and promotion of competition. For example, bipartisan Senate legislation announced on December 9 (but not yet introduced), the Platform Accountability and Transparency Act (PATA), would require social media companies to provide vetted, independent researchers and the public with access to certain platform data. Under the draft bill circulated by the Senate sponsors, companies that fail to comply would be subject to enforcement from the FTC and face the potential loss of immunity under Section 230 of the Communications Decency Act.  Additionally, the bill would give the FTC the authority to require that platforms proactively make certain information available to researchers or the public on an ongoing basis, such as a comprehensive ad library with information about user targeting and engagement.

  • Some observers have questioned whether the FTC is the agency best positioned to helm this new digital enforcement regime, rather than a new, dedicated agency. Former FTC Chair Tom Wheeler wrote last year: “Oversight of digital platforms should not be a bolt-on to an existing agency but requires full-time specialized focus.” Wheeler is part of a group of former regulators who have proposed the creation of “a new approach that replaces industrial era regulation with a new, more agile regulatory model better suited for the dynamism of the digital era.”
  • But the sentiment among lawmakers looking to enact legislative solutions in the nearer term seems to be that working within the existing structures is preferable to, or more expeditious than, creating an entirely new regulatory architecture.
  • Meanwhile, the Biden Administration, through its appointees and policy changes, is positioning the FTC as a check on technology companies. And in addition to the legislative proposals discussed in this overview, members of Congress continue to call on the FTC to flex its existing investigative, regulatory and enforcement muscles to promote stronger algorithmic accountability.

Section 230 reform. The aforementioned provision of the 1996 Communications Decency Act generally shields website platforms from liability for content created by third parties. Legislators from both parties have called for Section 230 to be altered or overhauled in response to recent events, though the two parties often approach the issue from different ideological perspectives.

  • Republicans have argued that major platforms have applied bias in restricting conservative content. Indeed, former President Trump vetoed the fiscal year 2021 National Defense Authorization Act because it did not “terminate” Section 230, and in 2020 he signed an executive order targeting this legal shield that Internet companies rely on to protect them from liability for user-created content.
  • President Biden said as a candidate that he supported “revoking” Section 230, but Congressional Democrats have generally favored a more targeted approach to address protections for harmful conduct on online platforms rather than on user speech – using “a scalpel instead of a jackhammer,” in the words of Representative Anna Eshoo (D-CA), whose district includes Silicon Valley.
  • Senator Maria Cantwell (D-WA), who chairs the Commerce, Science and Transportation Committee, the key Senate committee with jurisdiction over digital regulatory issues, said at a December 9 hearing on “Disrupting Dangerous Algorithms” that when Section 230 was enacted, as lawmakers first confronted the challenge of online content moderation, “I don’t think we really ever thought that it was going to be automated. Now, algorithms have buried choices in them, they are in fact editorializing.”
  • Chair Frank Pallone (D-NJ) and other members of the House Energy and Commerce Committee, the key House committee with jurisdiction over digital regulation, in October introduced the Justice Against Malicious Algorithms Act (HR 5596), which would amend Section 230 to remove absolute immunity in certain instances, such as when an online platform knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury.
  • Senator John Thune (R-SD), the ranking member of the Commerce Committee’s Communications, Media, and Broadband Subcommittee and lead co-sponsor of the PACT Act (see below), said Section 230 “is ripe for reform,” adding that there is “a bipartisan concern that social media platforms are often not transparent and accountable enough to consumers with respect to the platform’s moderation of user-generated content.”
  • Haugen, the former Facebook product manager who has testified before the House and Senate Commerce committees, has argued that Congress should amend Section 230 to hold companies accountable for the algorithms they control that determine or intensify the content that users see – rather than focusing on user-generated content, which technology companies have less control over. (Haugen did not endorse any particular legislation in her congressional testimony.)
  • Legislating online content and amending Section 230 liability protections will require lawmakers to walk a fine line between enforcement against certain content, on the one hand, and First Amendment, free speech and constitutional concerns on the other.

Other key proposals

As suggested above, many of the House and Senate bills proposed to regulate how algorithms are used and controlled have been sponsored by the chairs and ranking minority members of the most influential committees of jurisdiction, enhancing the likelihood that at least some of these initiatives will be enacted. In addition to the bills cited above, here is a look at just a few of the growing number of proposals currently pending in the House and Senate.

  • The Algorithmic Justice and Online Platform Transparency Act (HR 3611 and S 1896): The legislation would prohibit algorithmic processes on online platforms that discriminate on the basis of race, age, gender, ability and other protected characteristics; establish a safety and effectiveness standard for algorithms; require online platforms to describe to users in plain language the types of algorithmic processes they employ and the information they collect to power them; and require online platforms to maintain detailed records describing their algorithmic processes for review by the FTC, in compliance with key privacy and data de-identification standards. The bill is sponsored in the House by Representative Doris Matsui (D-CA), a member of the Energy and Commerce Consumer Protection Subcommittee, and in the Senate by Senator Edward Markey (D-MA), a member of the Commerce Subcommittee on Communications, Media, and Broadband.
  • The Filter Bubble Transparency Act (S 2024 and HR 5921): This bill would establish requirements for large online platforms that use algorithms applying AI or machine learning to user-specific data to determine the manner in which content is displayed to users. Platforms must notify users that the platform uses such data and make a version of the platform available that uses only user-specific data that has been expressly provided by the user and which enables users to switch between the two platforms. These requirements do not apply to search engines operated by downstream providers with fewer than 1,000 employees and that have an agreement to access an index of web pages from an upstream provider, though upstream providers would be required to make their algorithm available to downstream providers as part of such an agreement. The Senate bill is sponsored by Senator Thune with bipartisan co-sponsorship. The House bill was introduced by Representative Ken Buck (R-CO).
  • The Justice in Forensic Algorithms Act of 2021 (HR 2438): A number of lawmakers are concerned about algorithms’ applications in law enforcement, such as the use of facial recognition technology. This bill establishes a federal framework to govern the use of computational forensic software that relies on an automated computational process to assess evidence in a criminal investigation. Elements of the framework include requirements for the establishment of testing standards and programs, requirements for the use of computational forensic software by federal law enforcement agencies and related entities such as crime labs, a ban on the use of trade secret evidentiary privilege to prevent federal criminal defendants from accessing evidence collected using computational forensic software or information about the software, such as source code, and limits on the admissibility of evidence collected using computational forensic software. Representative Mark Takano (D-CA) is the sponsor.
  • The PACT Act (S 797): The bipartisan Platform Accountability and Consumer Transparency (PACT) Act, like many of the pending measures, was also introduced in previous sessions of Congress but did not advance. The bill would update Section 230 to require that large online platforms remove court-determined illegal content and activity within four days and would exempt the enforcement of federal civil laws from Section 230 so that online platforms cannot use it as a defense when federal regulators like the FTC and the Department of Justice pursue civil actions. Online platforms would be required to explain their content moderation practices in an acceptable use policy that is easily accessible to consumers. The bill is sponsored by Senator Brian Schatz (D-HI), a Commerce Committee member and leader on AI issues, with bipartisan co-sponsors, among them Senator Thune.
  • The Protecting Americans from Dangerous Algorithms Act (S 3029 and HR 2154): This bill would limit social media companies' immunity from liability if they promote extremist content on their platforms that leads to offline violence, such as interference with civil rights or acts of international terrorism. The bill narrowly amends Section 230 to remove liability immunity for a platform if it uses an algorithm to amplify or recommend the proscribed content. Senator Ben Ray Luján (D-NM), who chairs the Communications Subcommittee of the Senate Commerce Committee, and Representative Tom Malinowski (D-NJ) are the lead sponsors.
  • The Social Media DATA Act (HR 3451): The Social Media Disclosure And Transparency of Advertisements (DATA) Act of 2021 would require the FTC to issue regulations that require large digital advertising platforms to maintain and grant academic researchers and the FTC access to ad libraries that contain specific data on advertisements in a searchable, machine-readable format. The ad library must include details about the advertisements, such as the ad targeting method, descriptions of the targeted audience for each advertisement, and the language contained within the ad. The bill also would require the FTC to convene a working group of stakeholders to provide guidance to Congress and the public on a set of best practices for social media research. Representative Lori Trahan (D-MA), a member of the Energy and Commerce Committee, is the sponsor.

In addition, legislation known as the Algorithmic Accountability Act of 2019, introduced in the previous session of Congress in the House and Senate, represented one of the most extensive regulatory approaches to AI ever introduced at the federal level. The proposal would have required specified commercial entities to conduct assessments of high-risk systems that involve personal information or make automated critical decisions, such as systems that use AI or machine learning. These include decision systems that may contribute to inaccuracy, bias, or discrimination, or that may facilitate decision-making about sensitive aspects of consumers' lives in any sector by evaluating consumers' behavior. The bill failed to advance in the previous Congress, but its sponsors have indicated they plan to reintroduce the measure in the near future.

With so many potential regulations and laws on the horizon, it is challenging to predict which will come to pass. But one constant theme emerges across these efforts, from both sides of the aisle: increased attention on algorithms and the growing role they play in our lives. Companies adopting such technologies would be well advised to expect some form of regulation, likely through the FTC and other agencies with overlapping jurisdiction, and to design in advance toward best practices: representative data, controls against bias, and industry-standard testing, validation, and total product lifecycle monitoring. DLA Piper will continue to monitor these developments on the Hill, in the common law, and across industries to guide its clients forward.

Learn more by contacting any of the authors, and visit our Artificial Intelligence Practice hub. 