The President’s claim that Twitter censored him by fact-checking his tweets alleging systematic vote fraud in mail-in ballots, and the ensuing May 28 White House “Executive Order on Preventing Online Censorship” (hereafter “the EO”), have created considerable political sound and fury. But relatively little attention has been paid to the EO’s practical effect.
The EO says its purpose is to prevent restrictions on political debate on communications platforms. It is directed at finding legal avenues under which platforms that engage in such restrictions can lose the immunity from civil and state criminal liability for third-party content that “interactive computer services” enjoy under 47 U.S.C. Section 230.
Section 230 immunities apply to providers and users of online sites, search engines, and applications, as well as to ISPs, among others, for content provided and developed by third parties. With narrow exceptions for intellectual property and federal criminal law violations, Section 230 shields interactive computer services and their users from liability for (1) third party-provided content, or (2) “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” 47 U.S.C. §§ 230(c)(1) and (2)(A).
The EO focuses on the second of these two protections and attempts to narrow it. It asserts that:
when an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.
The first prong of Section 230 immunity, subparagraph (c)(1), turns on whether content was “provided by another information content provider,” and the statute does not indicate that editorial conduct should be treated differently. The statute also appears to give providers broad leeway to take down content that a provider “considers to be . . . otherwise objectionable” without forfeiting immunity. However, Section 230(c)(2)(A) is limited to actions “taken in good faith.”
The EO announces four initiatives to advance its purposes. The initiatives generally apply to a broad range of “online platforms,” defined as: “any website or application that allows users to create and share content or engage in social networking, or any general search engine.” Sec. 7. However, a few of the requirements apply only to large online platforms.
1. Suggested FCC rulemaking. The first initiative is the filing, within only 30 days, of a petition by NTIA with the FCC (which, as an independent agency, is not directly subject to the EO), asking it to issue rules attempting to narrow the interpretation of Section 230 in several respects:
First, to issue rules regarding the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) would lose protection under subparagraph (c)(1), on the theory that (c)(1) does not address editorial actions. If a court adopted this theory, it could create significant risk for interactive computer services that moderate or restrict content in ways not expressly protected by the second prong of Section 230 immunity.
Second, to issue rules spelling out the conditions under which an action restricting access to or availability of material is not “taken in good faith” within the meaning of subparagraph (c)(2)(A) of Section 230, particularly when such actions are:
(A) deceptive, pretextual, or inconsistent with a provider’s terms of service (because the statutory standard is subjective, turning on what the provider itself considers objectionable, only pretextual decisions appear inconsistent with the “good faith” standard in the statute); or
(B) taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard (a condition mentioned nowhere in the statute).
In addition, NTIA is invited to propose any other regulations that may be appropriate to advance the policy in the EO.
In contrast to the quick 30-day deadline for filing the petition, there is no deadline for the FCC to decide whether to begin a rulemaking. As of this writing, it is unclear whether a majority of the FCC will undertake this rulemaking. More importantly, the FCC has not previously ventured into this area, and it is unclear whether courts would defer to its interpretation of a statutory provision that does not mention the agency.
2. FTC consideration of unfair and deceptive practices
The EO’s second initiative involves the Federal Trade Commission (another independent agency).
It first asks the FTC to consider whether “practices by entities covered by section 230 that restrict speech in ways that do not align with those entities' public representations about those practices” constitute deceptive trade practices. In theory the FTC could bring such a case, although it would need to show that the misrepresentation was material to consumers. Second, the EO asserts, without citing any controlling authority, that large social media platforms are public fora for First Amendment purposes. In fact, on May 27, 2020, a D.C. Circuit panel rejected this theory in a similar context in Freedom Watch v. Google, No. 19-7030, slip op. at 2 (D.C. Cir.). The EO directs the FTC to consider, under this theory, whether any of the 16,000 complaints the White House Social Media Office has solicited and received, alleging that large social media platforms [including Twitter] censored or otherwise took action against users based on their political viewpoints, “allege violations of law.” The EO also asks the FTC to consider whether to issue a report regarding the complaints.
The EO thus asks an independent agency to act under a contested legal theory. It is unclear whether the agency will act on the request, or whether an enforcement action brought under a public forum theory would succeed in court.
3. Organizing State AG working group to target platform bias
Ironically, the provision of the EO likely to have the most significant impact is the convening of a DOJ-State Attorney General working group, which could have been convened without any Executive Order at all.
The focus of this working group will be on enforcement of state statutes that prohibit online platforms from engaging in unfair or deceptive acts or practices. The working group will consider the 16,000 allegations of political bias that the White House has solicited and received, as well as publicly available information regarding:
(i) increased scrutiny of users based on the other users they choose to follow, or their interactions with other users;

(ii) algorithms to suppress content or users based on indications of political alignment or viewpoint;

(iii) differential policies allowing for otherwise impermissible behavior, when committed by accounts associated with the Chinese Communist Party or other anti-democratic associations or governments;

(iv) reliance on third-party entities, including contractors, media organizations, and individuals, with indicia of bias to review content; and

(v) acts that limit the ability of users with particular viewpoints to earn money on the platform compared with other users similarly situated.
These topics invite further investigations by motivated State AGs, at DOJ’s suggestion, under broadly drafted state unfair competition and unfair and deceptive trade practice laws.
The working group is also charged with developing a model state law for online platforms, and DOJ is to develop a federal legislative proposal to further the policy objectives of the EO.
4. Prohibition against federal government advertising on online platforms that engage in viewpoint discrimination
The final initiative is a DOJ review of the online platforms on which federal agencies advertise, to assess whether any “are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.”
Finally, if the DOJ-State working group moves forward, there is a risk that one or more State AGs will pursue investigations of platforms based upon the criteria outlined above. For this reason, platforms may want to follow the working group’s activity, review applicable state UDAP and unfair competition laws in states whose AGs are vocal on this issue, and consider how they would respond to an inquiry from any of those State AGs.
But, as with many election-year initiatives with political overtones, it is too early to tell whether most or all of the EO’s initiatives will in fact move forward in earnest, or whether their pendency will instead hang as a sword of Damocles over large social media platforms should they take down content favored by the Administration.
Learn more about the implications of the EO by contacting either of the authors.