Commentary
RealClear Markets

It's Time to Revisit Section 230 - But Not For the Reasons You Think

Harold Furchtgott-Roth
Senior Fellow and Director, Center for the Economics of the Internet
Kirk Arner
Legal Fellow, Center for the Economics of the Internet

Reacting to the perceived political bias of certain social media platforms, President Trump recently issued an executive order asking various federal agencies to review “Section 230.”

Few laws enjoy the status of being known simply as a number, but such is the aura of “Section 230.” Few people know it well enough to call it by its formal title: “Protection for Private Blocking and Screening of Offensive Material.”

To most people, that title doesn’t sound like it gives the federal government the authority to review—much less censor—online political content, or online speakers based on the political content of their speech. And indeed, Section 230 provides no foundation for government review or censorship of any online material whatsoever.

Instead, Section 230 is a limitation on legal liability for “interactive computer services” that post information provided by third parties. It was passed as part of the Communications Decency Act, or CDA, a law that attempted to prevent children from viewing online material deemed obscene, indecent, or otherwise offensive. In 1997, following an inevitable legal challenge, the Supreme Court struck down most of the CDA in Reno v. ACLU on First Amendment grounds. But Section 230 survived.

Neither in 2020 nor in 1996 and 1998, when Section 230 was written and amended, was the term “interactive computer service” in common usage. Strangely, Section 230 repeatedly uses the phrase “the Internet and other interactive computer services,” as if the Internet were but one of many forms of interactive computer service. Other forms might have included electronic financial systems accessed through an ATM or a credit card network, which were not directly part of the Internet in 1996. In truth, the statutory definition of an “interactive computer service” appears to be modeled more on AOL and other nascent internet providers circa 1996 than on social media today:

The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.

Yet courts have expanded the definition of what is considered an “interactive computer service” to include not just ISPs, but practically any website—an interpretation seemingly inconsistent with Section 230’s language concerning “the Internet and other interactive computer services.”

Much Section 230 jurisprudence has focused on two specific subsections: (1) § 230(c)(1), which provides liability protection for distributing material created by third parties, save for unlawful activities such as intellectual property theft; and (2) § 230(c)(2), which provides liability protection for actions “to restrict access to or availability of material” that might be harmful to minors or “otherwise objectionable.”

Courts have interpreted these liability shields expansively, seemingly far beyond the “offensive material” concept in the statutory language. Under the current interpretation of Section 230(c)(1), if Jane Doe posts a message on a social media site or other “interactive computer service,” Jane is the speaker, not the website. This distinction matters for a variety of tort claims that may arise from Jane’s content. For example, if Jane defames a person via a tweet or a Yelp review, Jane, not Twitter or Yelp, would be held personally liable in the event of a lawsuit. Under Section 230(c)(2), platforms are not liable when they remove undesirable content from their platforms, including a wide array of items such as pornography and even hate speech.

The Electronic Frontier Foundation labels Section 230 “the most important law protecting Internet speech.” This is quite the accolade for one section of a broader statute that largely sought to censor the Internet and was eventually deemed unconstitutional on First Amendment grounds following an ACLU lawsuit.

Moreover, Section 230 as it exists today is a fragile foundation for protecting online speech. Despite years of case law expanding the reach of Section 230, a future court may ultimately narrow that reach through stricter review of the statutory text, based on either the definition of an “interactive computer service” or the statute’s apparent intent to censor online content for the protection of minors.

Additionally, a future court may insist that “interactive computer services” meet obligations under Section 230(d) in order to receive the liability protections of Sections 230(c)(1) and (c)(2). Section 230(d), inserted by Congress in 1998 after the CDA’s initial passage, states that:

A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections. (emphasis added)

It is not clear that any online service meets these requirements. Indeed, few people likely recall receiving such a notice when joining a social media website. We examined the “Terms of Service” for Facebook, Snapchat, YouTube, Twitter, Instagram, Yelp, and other platforms that appear to meet the statutory definition of an “interactive computer service.” None closely resembles the requirements of Section 230(d). None references Section 230(d), much less identifies “parental control protections,” “filtering services,” or “current providers of such protections” for minors. Apparently, either these websites are not “providers of interactive computer service” or they are not in compliance with Section 230(d).

Yet despite its obvious fragility, Section 230 remains important for online speech. That it originated as part of a statutory package effectively intended to censor online speech is perhaps the height of irony.

Nevertheless, absent the twin liability protections of Section 230, the user content-driven Internet we know and love today might cease to exist. Websites like Twitter, Facebook, and even newspapers with comment sections would be flooded with litigation by aggrieved parties. To stem that onslaught, sites would need to deploy armies of content moderators or opaque algorithms, delaying the posting of user content by hours, days, or even weeks, if it is posted at all. This would only exacerbate concerns about whether social media companies remain neutral forums for political and other controversial forms of speech.

Much has been made of Twitter’s conduct in recent weeks in the context of the Section 230 debate. Perhaps wanting to seem more like a neutral moderator than a censor, Twitter has increasingly refrained from outright removing content from its platform, particularly the speech of prominent political figures. Instead, it has taken the tack of labeling certain content as violating its terms of service, promoting disinformation, or failing a political “fact check.” Indeed, such labeling of one of the president’s tweets was the impetus for his executive order in the first place.

In doing this, though, Twitter exposes itself to more, not less, legal liability. By marking users’ posts as false, misleading, or perpetuating disinformation, Twitter exercises its own speech, independent of its users. Thus, Section 230 does not, and should not, protect such comments.

While the bar for defamation of public figures is exceedingly high, the president, other politicians, and even private citizens displeased at being publicly branded liars or propagandists have, and should continue to have, every right to pursue legal action over such labels. And as a practical matter, this labeling is hardly a less intrusive interference with the free flow of ideas online. Indeed, in many ways, particularly given the ambiguity of what should or should not survive a political fact check, such labeling is a much more opaque and nefarious means of online censorship than simply removing users’ posts altogether.

Section 230 is praised by many yet understood by few. Its origin as part of a larger legislative package intended to sanitize the Internet for minors is deeply ironic. That original intent may also put the future of Section 230 in peril, subject to the whims of future courts. Additionally, a court might find that services that do not meet Section 230(d)’s obligations are simply not “interactive computer services,” and thus are ineligible for Section 230’s twin liability protections. Few services meet those obligations today.

Ultimately, Section 230 should be reviewed by either Congress or the FCC to clarify its meaning, safeguard its protections for online speech, and strengthen it to withstand future review by discerning jurists.
