
20 September 2022 | 5 minute read

A new tech regulatory agency? Key senator calls for more enforcement “teeth”

Senator Lindsey Graham (R-SC) raised eyebrows around Washington last week when he suggested that he and colleagues are working on creating a “regulatory environment with teeth” to police, and perhaps license, social media companies.

Social media companies have witnessed a spate of recent litigation activity claiming algorithmic targeting of minors, intentional addiction of users, and amplification of harmful messages through content-neutral algorithms. These cases lob creative theories designed to circumvent the longstanding protections of Section 230 of the Communications Decency Act, 47 U.S.C. § 230 (Section 230), the First Amendment, and other defenses. Meta and other defendants are challenging whether social media platforms are even products subject to product liability claims.

Other litigation has focused on the political impact of social media. In May 2022, Karl A. Racine, attorney general for the District of Columbia, filed suit against Mark Zuckerberg personally, alleging “Mr. Zuckerberg contributed to Facebook’s lax oversight of user data and implementation of misleading privacy agreements,” allowing third parties to “use that data to manipulate the 2016 election.”

Disputes over the ongoing advisability of Section 230 and other hands-off approaches to social media have long fueled speculation that new regulation might be imminent. That day may now be closer than ever.

Speaking at a Senate Judiciary Committee hearing on Tuesday, Graham said he was working with Senator Elizabeth Warren (D-MA) to create a process to regulate the tech sector. Senator Josh Hawley (R-MO) is also said to be involved in the incipient proposal. A Congressional aide confirmed that Warren and Graham are working together on the issue but said a final agreement is not imminent.

“Elizabeth and I have come to believe that it’s now time to look at social media platforms anew,” Graham said at the hearing (see this video clip), stressing the importance of bipartisanship.

“We’re gonna create a system more like Europe – a regulatory environment with teeth,” Graham said. He said the potential new enforcement regime would have the “power to deal with privacy issues, content moderation.” Companies operating in this space would have to “harden [their] sites against foreign interference” and “protect [their] sites against criminality,” the senator said, adding that there would be an independent appeals process for those who believe their content was wrongly taken down.

Graham noted that the current regulatory body overseeing the industry was created in 1914 – presumably a reference to the Federal Trade Commission (FTC) – and had not kept pace.

While this and other proposals work their way through the legislative process, regulators are not waiting around to assert their jurisdiction over social media and other algorithmic risks. The FTC has asserted its jurisdiction over AI, noting that its mandate to curtail “unfair or deceptive practices” under the existing FTC Act “would include the sale or use of – for example – racially biased algorithms.”

Specific details of the new agency proposal have not been released.

Meanwhile, in Texas

Adding to the simmering pot: while Congress mulls legislation to regulate social media moderation, the US Court of Appeals for the Fifth Circuit has just upheld Texas House Bill 20, a state statute that “prohibits large social media platforms from censoring speech based on the viewpoint of its speaker.” NetChoice, LLC v. Paxton, No. 21-51178, 2022 WL 4285917 (5th Cir. Sept. 16, 2022).

As the panel put it: “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.”

The law regulates platforms with more than 50 million monthly active users, describing them as “common carriers by virtue of their market dominance.” HB 20 forbids censoring by platforms based on “(1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state.”

The law does carve out and permit regulation of certain speech, including speech that “directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge.” Notably, it does not expressly carve out censoring of violent threats against a person because of their viewpoints. However, the law also allows censorship of “unlawful expression,” defined as “an expression that is unlawful under the United States Constitution, federal law, the Texas Constitution, or the laws of this state, including expression that constitutes a tort under the laws of this state or the United States.” This exception broadens the scope of permissible moderation and portends litigation over what expression qualifies under different facts.

Section 2 of the law requires covered platforms to publish a “biannual transparency report” and establish an internal appeals process for users. Perhaps in tension with growing legislative momentum against Section 230, the Fifth Circuit invoked Section 230 as one basis for concluding that these platforms are mere content hosts, not content editors with First Amendment interests in users’ speech. The opinion also conflicts with the Eleventh Circuit’s ruling in NetChoice, LLC v. Att’y Gen. of Fla., 34 F.4th 1196 (11th Cir. 2022), creating a circuit split and foretelling more to come. See NetChoice LLC v. Paxton, 2022 WL 4285917, at *38 (“We part ways with the Eleventh Circuit, however, on three key issues.”).

We will continue to monitor these developments. To find out more about the implications for your business, please contact either of us.
