Google’s attempt to “demonetize” The Federalist has been roundly criticized from a variety of perspectives, including antitrust, free speech, and the notion that it reeks of political correctness run amok. But the real problem with Google’s action here is that it strikes at the heart of the fundamental structures that support the Internet as we know it today, and emboldens those who seek to erode them.
The Internet is perhaps the freest and most efficient market in history. On the Internet, users can discover entertainment and information through search engines and on social media, and businesses can speak to customers through advertising—particularly with advertisements targeting individuals with specific characteristics or preferences. Providing information about potential customers to advertisers is ultimately the business of Google and other advertising agents and brokers.
Advertisers are solely interested in the characteristics of the potential customers visiting a website, not those of the website itself. Google and other information aggregators track all manner of information about individuals, including demographic profiles, Internet search histories, and so on. These aggregators can and do sell information about potential customers to advertisers, and that information is valuable. But information aggregators do not collect, much less sell, information about the political leanings of websites.
Moreover, advertisers are rarely concerned about the political views of potential customers. While some boutique businesses do target liberals or conservatives, the vast majority of businesses do not, because it is unlikely they would survive if they did. An individual who recently conducted an online search for new Jeeps would be of interest to local Jeep dealerships. A 22-year-old who had belonged to a college fraternity might be of interest to beer advertisers. But in neither instance would the person’s political persuasion be relevant.
However, advertisers are concerned about unambiguously unlawful, hateful, or otherwise offensive content seemingly condoned by websites. In 2017, Google was embarrassed as many major advertisers pulled advertisements from Google-owned YouTube. At the time, YouTube was filled with racist, antisemitic, and other hateful videos. The advertisers were not concerned with comments from random users, but with the videos themselves, which YouTube allowed to be uploaded and before which their ads ran.
In the years that followed, Google continued to pull substantial hate-related and pirated content from YouTube, but some hate-filled video channels remained in place. Needless to say, Google has not blocked advertisers from its subsidiary YouTube, nor has it sent YouTube threatening emails about the matter.
The challenge for YouTube remains the technological difficulty of monitoring the content of video uploads. Google has some of the most sophisticated search and filtering technology in the world to identify stolen intellectual property and other unlawful content in videos users upload. These videos are not anonymously uploaded, and Google can take down unlawful content, as well as notify or outright ban the accounts of the parties that upload them. The threat of YouTube removing one’s entire channel from its platform is a powerful incentive for video creators who may be earning large revenues from an existing video library—a threat that simply does not exist for YouTube users who passively watch content.
Comments are thus more problematic to monitor: they are written in seconds and often posted far more anonymously. YouTube allows comments, and so too do many commercial websites. There are at least two broad categories of challenges in monitoring for hate speech or derogatory comments. One is the inherently subjective nature of identifying such content, particularly for public figures. If a statement such as “Hate Donald Trump” is hate speech under Google’s definition, then the public comment sections of most websites would contain hate speech. Second is the ease with which commenters intent on hate speech can circumvent most filters. For example, switching a letter in a word with a symbol leaves the word intelligible to readers even though it is technically misspelled.
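To see why the substitution trick works, consider a minimal sketch of a naive blocklist filter (the blocklist entry and function here are hypothetical illustrations, not Google’s actual system):

```python
# Hypothetical example: a naive word-blocklist filter is defeated
# when a commenter swaps one letter for a look-alike symbol.

BLOCKLIST = {"slur"}  # stand-in for a real banned word

def naive_filter(comment: str) -> bool:
    """Return True if the comment contains a blocked word verbatim."""
    words = comment.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(naive_filter("that is a slur"))  # True: exact match is caught
print(naive_filter("that is a sl*r"))  # False: one swapped character evades it
```

A human reader parses “sl*r” instantly, but an exact-match filter does not, which is why comment moderation at scale cannot rely on simple word lists.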
Most websites in America—certainly including YouTube—are not immune from some forms of hate speech in their comments. Yet advertisers have not demanded that their ads be pulled from websites with offensive comment sections. Thus, it is surprising that Google targeted The Federalist—and apparently no other website—with a tweet stating “Our policies do not allow ads to run against dangerous or derogatory content, which includes comments on sites.” This raises the question—of all of the websites in America with potentially objectionable public comments, why was The Federalist targeted?
Google and other technology companies often trumpet the importance of Section 230 of the Communications Decency Act, which shields interactive computer services from liability for user-uploaded content. And indeed, absent Section 230 protections, much of the user-driven Internet we know and love today would be precluded by fears of endless litigation.
Ironically, though, Google’s threat to The Federalist suggests that those protections are unnecessary, and that websites such as The Federalist and YouTube should be able to adequately self-police user content without fear of legal liability. Large websites like YouTube, able to hire and deploy the best engineers and software in the world, might well survive absent Section 230 protections, setting aside the harm to user speech. But small websites like The Federalist lack the expertise and financing for such systems.
By targeting The Federalist and suggesting it can adequately police comment sections for hateful content, Google is unwittingly supporting the efforts of President Trump, Senator Hawley, and others to weaken Section 230. Worse still, Google imperils—via the threat of demonetization—small websites, and consequently the Internet at large.
Read in RealClear Markets