“Believe me,” said—well, not really “said,” but posted—Mark Zuckerberg. Raising Chico Marx’s old question: Who are you going to believe, me or your own lying eyes?
The essence of his plea: trust me again, even though your eyes have seen how I abused that trust in the past. Zuckerberg feels our pain. And no doubt his own. Not because a few billion have been wiped off the value of his portfolio. He was going to give it away anyhow. In his telling the current furor is all about trust, and reputation, and we have no reason to doubt that his concern and regret are genuine, his solution the best that can be managed within the confines of the existing relationship between Facebook and the state. It is just that he is asking to have his reputation restored merely because he is belatedly scrutinizing app developers, when the real problem is the business model of his creation—and public policy that has taken too long to adjust to the new world in which Big Tech has morphed into Big Media.
Bear with me, while I explain that business model, how it works, and why it presents a real challenge to those of us who shy away from regulatory solutions to problems such as the one Facebook creates.
The model is roughly this: Facebook needs more and more users to spend more and more time turning over more and more personal data to Facebook—which can then use this data to become more and more attractive to advertisers, who will pour more and more money into Facebook’s coffers so that they can target their ads with more and more precision.
To increase usage and therefore profits, Facebook offers its users not only an opportunity to interact with one another—to become what are called “friends”—but also news, to draw them to the site and keep them there.
COO Sheryl Sandberg says, “We do not sell your data.” Which is true in the limited sense that Facebook does not identify individuals. But the entire business model depends on selling your data, aggregated with that of others, to advertisers who pay handsomely for it because it solves a problem the industry has long faced: John Wanamaker once complained, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” By making it possible for latter-day Wanamakers to target defined groups, Facebook reduces the waste, which is why it now takes in over $40 billion annually in ad revenue worldwide, roughly three-quarters of that falling to its bottom line.
This edifice is built on the proposition that users find it worthwhile to surrender their data to Facebook in return for the service Facebook makes available to them, just as they surrender data to Google in return for access to its “free” search engine. As Zuckerberg correctly points out, users’ willingness to do that depends on their trust in him to handle their data with care. Rather like the bank customers who trust a bank to handle their cash prudently in return for a bit of interest on their deposits.
Those of us who believe that the market can correct most business misfeasances hope that when trust proves misplaced, customers will take their business elsewhere, punishing the badly performing company with a loss of custom and, if the loss is severe, bankruptcy. You don’t like the way your dry cleaner is handling the clothes you entrust to him? Go elsewhere: the matter is purely between you and him.
But we found with banks that if they misuse and lose a customer’s money, he cannot recover it in the absence of a government-created insurance mechanism. And also that the matter is not merely between the customer and the bank: The entire financial system is at risk of bank runs and other developments that affect society as a whole.
Fast forward to Facebook. A customer whose data is made available to a person or company other than Facebook, which then sells it on to another for uses the Facebook customer did not intend, can never get his data back, any more than a customer of a failed bank could get his money back if he did not have the long arm of government on his side—the Federal Deposit Insurance Corporation, among other authorities—to supplement an invisible hand too weak to lift a finger in his defense.
Worse still, as with finance, where the consequences of bad behavior are not confined to those directly affected, so with Facebook. An unhappy user can withdraw from the system, but the damage done by the mishandling of his data is not merely a matter between him and Mr. Zuckerberg. The mishandling of data created a tool Facebook customers did not intend anyone to have: a pool of information that enables politicians to target them with appeals with which they might not otherwise have been burdened. In other words: tools sufficiently powerful to influence voting patterns.
And worse: What Zuckerberg concedes was a violation of the trust placed in him enabled bad guys to create unauthorized weapons not only for use in political campaigns, but also to undermine the cohesion of American society, the functioning of the democratic electoral system, confidence in that system, and the accuracy of the information on which voters depend. Just as we regulate pollution of our physical environment when markets are not up to the task, we now must regulate pollution of our political environment when markets cannot cope with the technology available to bad actors in highly concentrated markets.
This is not a problem that can be cured by a few defections from the Facebook system, or even the tens of thousands that have already occurred. (Even if one of the defectors is none other than Cher.) Any more than the problem created by a bank failure can be corrected by a flight of customers to another bank. It is the political system that is at risk, not the happiness of an aggrieved customer. And the problem has been compounded by the failure of the government to prevent Facebook from acquiring small companies with the potential of growing into significant competitive alternatives.
Which is why Zuckerberg is right to concede that some form of regulation might be necessary.
The first part of a multi-pronged solution would be to prohibit Facebook from making any acquisitions. It is difficult to tell which of the companies it wants to acquire might grow into competitors, as many feel Instagram might have become had it not been acquired by Facebook. Given all that we now know, it’s better to err on the side of allowing these potential competitors to survive as independents than to permit Facebook to remove their threat to its market dominance. Yes, this would deprive such companies of financing by Facebook, but a shortage of venture capital for promising high-tech firms is not a serious problem: Good prospects can find funds elsewhere, although not at a price that includes the premium Facebook is willing to pay to stifle incipient competition.
Second, require Facebook to stop making its platform available to app developers. Simply as a logistical matter, Zuckerberg’s promise to “restrict” such access is difficult to enforce—witness the massive resources he is prepared to devote to trying to police such a restriction. Sandberg, in her television interview, repeatedly cites “bad actors,” who will always try to abuse the system. Which is true—and all the more reason to close the app entry door to the Facebook database.
But bad actors did not make Facebook available to app developers; they did not ignore the probability that some app developers might sell the personal data to an unauthorized buyer; they did not ignore warning signs that such abuse was occurring. Experience shows that Facebook cannot control the use to which developers will put the data they get—data not only on subscribers to their app, but also on the friends of those subscribers, whose permission for such a release was not sought. And the Zuckerberg-Sandberg assurance that if they discover an app developer to be abusing the system, they will make that transparent, cannot retrieve the data. Harm done by unauthorized use of the data is irreversible.
Third, and most important, recognize that Facebook is a media company, a position Congress inched towards last week when it decided that the protection from liability that internet companies enjoy would not apply to sex trafficking. Media companies—newspapers being the most prominent example—are responsible not only for the safety of the presses on which they print news, but for the content they choose to print.
Expand what Germany and the Congress have already done, and you have something of a solution to the Facebook problem: In Germany, Google and Facebook face fines of up to about $60 million if they fail to remove hate speech within 24 hours of its appearance on their systems. That provision could be expanded to any form of “speech” that would be subject to legal action if it appeared in a newspaper. And Congress’ decision to remove the protection Facebook and others now enjoy from liability for what appears on their systems when it comes to sex trafficking can be extended to all content. Yes, this makes the companies censors of a sort. But no more so than other media companies which are responsible for the content of what they publish.
Sandberg counters that Facebook is big tech, not big media, because it employs engineers, not journalists. But this distinction ignores what one prominent participant in Silicon Valley’s business world calls an inconvenient truth: Some two-thirds of Facebook’s two billion users rely on it as a news source, according to Pew Research; millions look to Twitter for breaking news; and Google provides news updates and other content. As Steve Kovach, correspondent at Business Insider, puts it, “Facebook’s news feed has become what the front page of the newspaper was for older generations.” If it operates like a media company and is read like a media company, it is likely a media company. And as such, it should be responsible for the news it carries. There is a tale told that Mark Zuckerberg consulted Rupert Murdoch for ideas on how to control what goes out on Facebook. Murdoch, who knows a thing or two about such matters, is said to have replied, “We call it journalism. We call it editing.”
Finally, it is worth considering whether Facebook, Google and others are not illegally tying access to their service to access to your data. Perhaps it would be possible to start with the proposition that you own your data, and proceed from there to a rule requiring these companies to offer their service for a fee if you retain data ownership, and free if you “pay” with your data. This would not be easy, as the fee charged would have to be regulated, lest it be set so high as to make the choice a sham. But such an option is certainly worth considering.
Are there important objections to the regime I am suggesting? Certainly. Any time government gets involved, innovation slows. Any regulation will have unintended consequences. But the comparison should not be with the idealized, libertarian world so beloved of Silicon Valley, which favors regulation for thee—you producer and user of fossil fuels, manufacturer of SUVs—but not for me. It should be with the real world we face today.
Almost 90 years ago a critic of newspapers when they were the main source of news accused them of “aiming at . . . power without responsibility—the prerogative of the harlot throughout the ages.” It surely is a legitimate role of government to prevent Big Tech from achieving a similar prerogative.