No, Mr. Zuckerberg, it’s not about free speech. It’s about your propaganda machine.

Facebook’s touting of free speech is deceptive. They want you to forget how their platform works.

In a recent speech, Mark Zuckerberg, CEO of Facebook, claimed that their promotion of known lies in political ads is “something we have to live with” because of Facebook’s devotion to the principle of free speech. He cast this decision in the same light as the civil rights struggles of Martin Luther King Jr. and Black Lives Matter.

A couple of weeks later, he informed Congresswoman Alexandria Ocasio-Cortez at a hearing that the reason they do not fact-check political ads is that they “want people to see that the politician has lied”, once again touting free expression as the animating principle behind their policy.

Nice.

That’s one way to explain away Facebook’s bumbling, incoherent, opaque, and self-serving rules about content moderation on their platform.

In reality, free speech has little to do with Facebook’s platform and business. Their constant touting of freedom of expression is little more than misdirection, the sort magicians use to get the audience to “look over there!” while the real magic is happening away from view.

Why do I say that free speech has little to do with Facebook? Here are the reasons, from the most obvious to the most subtle.

Facebook is not the government

The First Amendment prohibits only the government from abridging citizens’ freedom of speech; it places no such restriction on private companies, which is what Facebook is. So the First Amendment has nothing at all to say about whether Facebook should moderate content, or about what content it should let through if it does. In fact, the First Amendment applies to Facebook in reverse: the government cannot abridge Facebook’s right to moderate its content as Facebook sees fit.

Section 230 does not require free speech for users

Fine, many people say, the First Amendment does not require Facebook to provide a platform for all users, but Section 230 of the Communications Decency Act does.

This, too, is incorrect. Section 230 is often nicknamed the “First Amendment of the Internet” and is widely credited as the special dispensation that allowed the Internet to exist in the form it does today. Perhaps this is why people have misunderstood it to mean that Section 230 requires tech platforms to respect the First Amendment rights of their users.

But this gets it backwards. What Section 230 actually does is protect Internet companies from liability for the third-party content they host, whether or not they choose to moderate that content. Effectively, Section 230 is the First Amendment for Internet companies: the government cannot abridge their right to moderate their content as much or as little as they wish. It is their choice.

Governments do not profit off of free speech. Facebook does.

Many people have made the argument that since Facebook is so big, it functions as a quasi-government, perhaps even a multinational one, like the United Nations or the European Union. Facebook itself promotes this notion. They recently created an oversight board nicknamed the “Supreme Court”. They have sought to enter the currency market with Libra, which governments have seen as a direct challenge to their ability to regulate transactions. Their Free Basics product, which gives free internet to under-served populations and which activists deride as a “walled garden”, was initially released as Internet.org, with a “.org” domain that suggested it was a non-profit.

Is it any surprise then that they want to appropriate to themselves a principle normally reserved for governments—free speech? (Apropos of nothing, Mark Zuckerberg is said to have a fascination with Augustus Caesar, to the point where his haircut is styled like Caesar’s.)

But here’s the most obvious thing wrong with this argument: governments do not profit off of the speech of their citizens. Facebook does.

Governments get their legitimacy from being accountable to their citizens. If a government guarantees the right of free speech to its citizens, it is adopting a constraint on itself—it is promising to not punish dissenters.

Facebook, on the other hand, is a profit-making company that, like others of its kind, is accountable only to its shareholders, not to its user-citizens. If it were a government, we would call it corrupt: that is the word we generally apply to governments run by profiteers who make decisions of national interest based on lining their own pockets.

Regardless of its high-minded rhetoric, Facebook has always made decisions that enhance its profitability. The thing to keep in mind here is that Facebook’s business is built on engagement with its content. The more content, and the fewer the restrictions on it, the more eyeballs and clicks, and the more user data to package as “insights” and Custom Audiences for its advertisers.

Therefore, while “free speech” for a government is a constraint on itself, “free speech” for Facebook is simply good business.

Facebook doesn’t just host speech, it amplifies it

Facebook is not a neutral platform. It has never been. It is a publisher.

This is the core reason why equating Facebook’s service with “free speech” is a misunderstanding at best, and deliberate deception at worst—I leave it as an exercise to the reader to figure out which.

Free speech is a negative right. It’s right there in the First Amendment: government shall not abridge the freedom of speech. Nothing about free speech guarantees your right to be heard.

Facebook isn’t, and never was, a passive host for third party content. As part of its core design, its algorithm amplifies the content that it deems will get the most attention from the user.

Sheryl Sandberg, Facebook COO (source: NY Post, Ella Pellegrini)

Recommendation algorithms are pretty nihilistic. They do not differentiate between posts that get a lot of clicks because they are divisive and inflammatory, and posts that get a lot of clicks because they showcase cute kittens. They cannot tell a well-reported journalistic piece that goes viral because it is a genuine scoop from a fake news article spun out in a few minutes, with made-up inflammatory “facts”, by the proverbial 400-pound guy sitting on a bed.

Nonetheless, they form educated guesses about users and their personalities: if you tend to be a gullible person who regularly clicks on fake news articles, they will feed you more of the same. If your Facebook friend tends to be a sophisticated news consumer, that friend will be none the wiser that while their feed is informing and educating them, you are descending into a hermetically sealed Potemkin village of entirely fake news.
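To make that nihilism concrete, here is a minimal sketch, in Python, of what an engagement-only ranking step could look like. Everything in it is invented for illustration: the post fields, the weights, and the topic-affinity lookup are my assumptions, not Facebook’s actual system. The structural point it demonstrates is that nothing in the score ever asks whether a post is true.

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_clicks: float    # model's guess at how likely this user is to click
    predicted_shares: float
    predicted_comments: float
    is_factual: bool           # known to a fact-checker; invisible to the ranker

def engagement_score(post: Post, topic_affinity: dict) -> float:
    """Score purely by predicted attention, weighted by the user's past clicks on the topic."""
    affinity = topic_affinity.get(post.topic, 0.1)
    attention = post.predicted_clicks + 2.0 * post.predicted_shares + 1.5 * post.predicted_comments
    return affinity * attention   # note: post.is_factual never enters the score

def build_feed(posts: list, topic_affinity: dict, k: int = 10) -> list:
    """The feed is just the top-k most attention-grabbing posts for this particular user."""
    return sorted(posts, key=lambda p: engagement_score(p, topic_affinity), reverse=True)[:k]

Give this the affinity profile of a user who keeps clicking fake news, and the sorted feed hands them more of it; give it a different profile, and the same code serves a different reality to their friend.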

Facebook has striven to push the notion that no one should think of Facebook as a gatekeeper of news. We merely show you what there is, they say. Sorry, but Facebook’s algorithm is, and has always been, a gatekeeper.

Facebook’s algorithm performs the same function as other publishers: choice. Any publisher takes the vast universe of possible content and decides what is worthy of your attention. They are judged for the content that is published under their imprimatur. This is the accountability we ask of any publisher, that they stand behind their choices.

A book publisher who does not want to be known as spinning conspiracy theories will stay away from publishing conspiracy theorists. An art gallery will scrupulously judge what art has merit before showcasing it. A radio disk jockey will imbue their selections with their personality. There is not a publisher in the world who is so promiscuous as to consider “free speech” as a reason to publish someone.

Now, because it is an algorithm rather than a human that acts as Facebook’s publisher, they have pretended that the publishing function isn’t happening at all.

But it is.

Your Facebook newsfeed takes the vast universe of bytes posted to their servers to show you a very small selection based on what it thinks you are most likely to engage with. This is a form of publishing. Like other publishers, Facebook must stand behind the choices made by their algorithm, and stop pretending that such choices were never made.

The solution to objectionable speech is more speech, except when it is microtargeted

In the recent feud between Aaron Sorkin (screenwriter of the film The Social Network) and Mark Zuckerberg, Zuckerberg shot Sorkin’s own words back at him: “America isn’t easy….You want free speech? Let’s see you acknowledge a man whose words make your blood boil, who’s standing center stage and advocating at the top of his lungs that which you would spend a lifetime opposing at the top of yours….”

What a glorious image! Who wouldn’t want to belong to this raucous, fractious democracy that calls on the best enlightenment principles from its citizens?

Sadly, Facebook does not resemble that remark.

“Speech” from its “citizens”—often money-paying ones—happens in secret little rabbit-holes, not on center stage where others can hear and refute objectionable speech with more speech of their own. Such lies are hidden by design.

The current wave of criticism started with their decision to permit politicians to lie in paid political ads on Facebook. One of the most cogent critiques of this policy came from within the company itself, in a letter of dissent signed by 250 employees. To understand their objection, you first need to understand microtargeting.

Microtargeting is at the core of Facebook’s business. In short: as you click, like, watch videos, or otherwise engage with posts, Facebook is learning what sort of person you are; based on their knowledge of other people like you in their 2.5-billion-strong userbase, they can accurately judge what sort of ads you will react to. This is an invaluable tool for advertisers, as you can imagine. No need to waste ad dollars on, say, sports equipment ads on cable channels where millions will see them but only a tiny minority will want to buy.
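For the curious, here is a toy sketch, in Python, of the lookalike-audience idea at the heart of microtargeting. The profile fields, the similarity measure, and the threshold are made-up stand-ins for the far richer signals a real ad platform uses; the sketch only shows the shape of the mechanism: find the users who most resemble the people who already responded to an ad.

import math

def cosine_similarity(a, b):
    """Similarity between two engagement profiles (dicts of topic -> intensity)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def lookalike_audience(seed_profiles, all_users, threshold=0.8):
    """Return the users who closely resemble people who already responded to the ad."""
    audience = []
    for user_id, profile in all_users.items():
        if max(cosine_similarity(profile, seed) for seed in seed_profiles) >= threshold:
            audience.append(user_id)
    return audience

# Hypothetical engagement profiles: how strongly each user interacts with a topic.
seed_profiles = [{"religion": 0.9, "local_news": 0.4, "sports": 0.1}]
all_users = {
    "u1": {"religion": 0.8, "local_news": 0.5},
    "u2": {"sports": 0.9, "cooking": 0.7},
}
print(lookalike_audience(seed_profiles, all_users))   # -> ['u1']

Swap the sports-equipment ad for a political lie aimed at a narrowly defined slice of voters, and the mechanism works exactly the same way.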

The power of microtargeting political ads is huge, especially if combined with voter rolls, and even more so if coupled with deliberate lies. Gullible religious voters can be shown ads with the lie that the opposing candidate supports banning Christianity. Middle class, frugal voters can be shown lies that the opposing candidate plans to raise taxes on families like theirs. Voters with sick children can be lured with a lie that the opposing candidate plans to do away with their health insurance. And so on.

Now we can see clearly why Zuckerberg’s excuse for permitting lies in paid political advertising is so phony. To recap: he said that they permit this as a matter of policy because they want users to see that the politician has lied. But:

  • Because political ads are exempted from fact-checking, the lies shown to users are not called out as lies. Far from users seeing that the politician has lied, they are led to believe that the lie is the truth.
  • Because of the news silos that are created by Facebook’s algorithms, users who see a certain lie may never be exposed to any other source of information that informs them of the facts. This is not the raucous public square of Zuckerberg’s imagination in his response to Sorkin. It is small, hidden, and secretive: an invaluable tool of propaganda.
  • This is Facebook’s policy worldwide. Which means that in countries without a robust, free press, Facebook is essentially acting as a handmaiden to authoritarian governments. Politicians have always lied; but through Facebook their lies can be surgically directed at the most vulnerable populations, with no hope of ever being corrected.

Lies are more powerful than the truth

As any expert in propaganda will tell you, lies go farther, faster than the truth. Lies are easy: limited only by one’s imagination, and designed to grab attention. Debunking those lies is painstaking, based on real research and investigation, and often can only be done by well-funded journalistic organizations with professional reporters. The debunking is usually much less glamorous and much more nuanced.

Any information service that is built on engagement has to contend with this basic notion.

The precious resource here is not people’s ability to lie. Whether you are intent on lying or intent on telling the truth, the First Amendment means that no government body will come in to stop you. There are plenty of avenues for a grand and raucous cacophony of opinions of all sorts, offered on all sorts of platforms: virtual, in print, or in meatspace.

The precious resource is attention. It is eyeballs-on-content. Is your lie or truth told in a vacuum—or do millions of eyeballs see it, and millions of clicking fingers propagate it? This is the precious resource that massive social media platforms like Facebook control, just as surely as OPEC controls oil reserves. No First Amendment guarantees the right of one person’s version of the facts to be prioritized above another’s.

Lies have another advantage: they don’t even have to be consistent with each other! This is another lesson that experts in propaganda teach us: it is enough to flood the zone with content in order to devalue the truth. If people hear constantly contradictory narratives, many will simply give up adjudicating the facts and resort to judging on emotion instead.

Given the inherent asymmetry between the power of lies and the power of truth, any monopolizer of attention like Facebook has a unique role. If they are as nihilistic as Facebook wants to be, they are doing more than merely permitting lies: they are enabling their spread, and are responsible for it.

Societies where truth and facts are devalued become ripe for fascist takeovers by empowering the worst among their leaders. Attention monopolizers like Facebook bear an awesome responsibility: if they are nihilistic about facts and non-facts, and spread both equally, propaganda will flood the zone, as it is designed to do, and win. They may not have told the lies in the first place, but they are responsible for spreading them. It is time we stop buying their sophistical excuses and hold them responsible.


(Featured image credit: Mandel Ngan, AFP via Getty Images via Rolling Stone)

Don’t let them off the hook: Why Social Media companies are responsible for fake news

Why social media is responsible for our fake news crisis.

We will talk about social media, I promise. But I want to first tell the story of Dan Rather’s fall from grace.

Dan Rather spent decades as the CBS Evening News anchor, one of the big three for nightly news. In 2004, on 60 Minutes Wednesday, he reported on the Killian documents: a series of memos that were critical of George W. Bush’s service in the Air National Guard. It turned out that there were many reasons, including the use of a modern typeface, to doubt the authenticity of the memos. Eventually, CBS News retracted the story, fired the producer, and forced Dan Rather to move up his retirement. The entire episode came to be known as RatherGate.

A highly respected decades-long career in news was capped by a “-Gate” because of lack of devotion to fact-checking. Of course no one believed that Dan Rather’s team forged memos themselves. Instead, what people were objecting to was their lack of editorial judgment. Given their giant megaphone, they had the responsibility, as a publisher, to be a gatekeeper for factual news.

Platform or publisher?

Why, then, do we not hold social media companies to the same standard? Why do we let them get away with spreading fake news, propaganda, conspiracies, and hate speech through their platforms?

At the heart of it, I would argue, is a semantic confusion. We generally don’t hold pure platforms accountable for the content carried on their wires. For instance, if I were to receive a death threat over my cell phone, not for a minute would I think to blame my wireless carrier, Cricket. The fault would lie with the threat-maker alone.

On the other hand, we can and do beat up on newspapers, cable shows, even bookstores, that carry objectionable content. As publishers, we expect them to have volition and exercise choice.

Social media executives have blithely shape-shifted from one form to another, depending on whether being a publisher or a platform suits them better in the moment. It is to their advantage that we remain confused.

On the one hand, they would love to be thought of as a public square, where absolutely everyone and anyone would be permitted to exercise their free speech, regardless of how savory their speech is. This permits tech companies to sidestep any shred of responsibility for what these third party interlocutors say, much like the wireless carrier in my example of a person making death threats over the cell phone.

On the other hand, as they have gobbled up more and more of the business of local news, they have come to define what public discourse even is. A public square is a grand and noble thing. But one thing it does not do is give some people a megaphone, and attenuate others. Social media does this daily, based on their currency of choice—clicks.

Thriving on our confusion, tech executives have been able to argue simultaneously for both kinds of dispensation. For example, in a recent interview with journalist Kara Swisher, Facebook CEO Mark Zuckerberg made the case for hosting known conspiracy theorists such as InfoWars and Holocaust deniers, based on his conception of Facebook as a neutral utility. At the same time, his lawyers were arguing in court that since Facebook is a publisher, it has a First Amendment right to make editorial choices that cannot be questioned.

Journalist Carole Cadwalladr faced a similar blank dismissal from Google executives when she informed them that their platform was sending people searching for phrases like “Jews are…” and “Hitler was…” into conspiracist rabbit holes. Pure platform, move along, they said, washing their hands of it. But Google and YouTube have shown themselves perfectly capable of exercising editorial judgment when public outcry becomes loud enough, as in their recent deplatforming of Alex Jones and InfoWars.

The truth is, extremism on their platforms lights up the click-o-meters and lines their pockets. They have no interest in policing their platforms or in us sorting out our confusion. But it is, actually, possible to think through the issues clearly and hold them to account.

The Kinkos analogy

The classic analogy offered by the “platform” pleaders is that of a print & copy center, say, called Kinkos. Imagine a person who goes to Kinkos and prints out a stack of child pornography to distribute in town. Since child pornography is not one of the forms of free speech protected by the First Amendment, our imagined customer is clearly breaking the law by possessing and distributing it. But is Kinkos?

Kinkos is a neutral platform and cannot be blamed. Customers who walk in are not screened for the content of what they want to print, nor would it be scalable to have Kinkos personnel look over what goes into the copy machine before it goes in.

There is much that is similar about social media companies. No Facebook or YouTube employee looks over your shoulder as you post. Nor do you have to apply to an editorial board before your post shows up. But there the similarity ends.

When you make your copies at Kinkos, you walk away with a stack; and there ends the relationship between Kinkos and your content, whether you were printing holiday cards or our less savory example above.

Of course, such is not the case with social media. For them, the content you post is the gold that they woo you for. Once it’s on their platform, it drives engagement. A recipe video you create might show up in YouTube’s recommendations. A linked news article might get thousands of likes and thereby show up in tens of thousands of Facebook News Feeds. A striking meme or a witty comment might travel far and wide among Instagram’s global users. Or your post might languish, with not a like or a comment on it, and thus not be very much seen.

A platform and a publisher

If you were watching carefully, you would have spotted both functions right there. Social media companies play the passive role of a platform when they merely host third-party content. But they play the active role of a publisher when they affirmatively push selected content to your feed. There is editorial judgment in which posts are heavily promoted and which languish without any eyeballs on them. Who does this? Who is the editor that determines virality, and based on what characteristic of the content?

We will get to that. But let’s work with the Kinkos analogy further. Imagine that copies of the prints you made at Kinkos stayed with them. Each morning, they picked a small subset of all the millions of pages that were copied at their nationwide offices the day before, to put into a newsletter. Recipes, memes, news articles, clever bon mots, and other selected content appeared in your mailbox daily, neatly formatted.

Would you then be so sanguine if they delivered child pornography in their curated newsletter? What if unbeknownst to you, Kinkos was delivering child pornography to the pedophile next door, while delivering bomb-making instructions to the psychopath across town, or delivering misinformation about vaccines to the frantic new mother across the street? Or if they delivered completely made up “news” that the candidate running in your district was prone to eat babies the day before an election?

Would we then forgive this imagined Kinkos—nothing at all like our Kinkos on Earth One—for their newsletters? No doubt they could plead that it wasn’t them that wrote up the fake news or the misinformation about vaccines—it was third party content from their customers. “We only chose the articles that we spotted people copying most avidly at our copy centers!”—they might say.

Nonetheless, they’re the ones who chose to distribute it. We would laugh them out of town if they shrugged their shoulders and professed helplessness at the content of their newsletters, the way social media executives regularly do.

When algorithms do it, does it count?

I trust that you see the absurdity of executives pleading helplessness in the Kinkos thought experiment. You probably imagined a human, a Kinkos employee perhaps, picking out objectionable content to put in their newsletter. But of course, there are no human fingerprints on the content that social media companies push to your feed. Those choices are made entirely by algorithms.

Facebook’s news feed, its suggested pages and groups, Amazon’s virtual shelves, and YouTube’s recommended videos are all highly curated lists, their contents produced by exquisitely crafted algorithms that these companies treat as their crown jewels. The teams that craft them are just as integral to the tech companies’ businesses as the editorial team that determines the stories in a newspaper is to a newsroom. So when the algorithm sends fake, nonfactual news your way, that represents a failure of its editorial function just as much as the Killian memos represented a failure of Dan Rather’s team at CBS News.

It is subtle, but you can tell that each time there is a scathing news article about propaganda promoted through social media, executives point to their algorithms, as if to absolve themselves. As if to say, “they did it, not us!”

Source: NTSB via ArsTechnica.com

Last year, an algorithm-controlled self-driving Uber car ran into a 49-year-old pedestrian crossing in front of it and killed her. The NTSB investigated the accident. While Uber waited for the results, they took all their self-driving cars off the road in several cities for nine months. Prosecutors considered charging Uber with a crime. The event caused a deep reckoning in the autonomous car industry and a widespread understanding that the technology is not ready to be put on the road yet.

Industry after industry has faced automation where software now does the job that humans used to do. But no one would accept errors made by software as any more forgivable than if the error was made by a human.

Whether Facebook’s news feed, YouTube’s “watch next” list, and the rest are put together by algorithms or by human editors, they represent just as much of a failure when they send fake news your way. A recent study found that Facebook was the biggest disseminator of fake news in the months leading up to the 2016 election. This article pins the figure at 22%: almost a quarter of all the fake news spread in the months before the 2016 election was spread by Facebook. But that woefully understates the problem: the truth is, for many voters, the entirety of their Facebook news feed was fake.

The problem is global. Investigative journalist Maria Ressa of the Philippines has called Facebook the “fertilizer of democratic collapse” in her country due to the fake news spread through it. Myanmar’s military instituted a propaganda campaign through inauthentic pages and articles on Facebook that led to genocide against the country’s Rohingya Muslims.

To be clear, the problem isn’t, and has never been, merely that social media companies host objectionable content on their platforms. The problem, rather, is that they selectively pump the filth out.

An incomplete business model

There was a time in America when rivers frequently caught on fire. The Cuyahoga River alone, which runs through Cleveland and empties into Lake Erie, caught fire fifteen times during the Gilded Age: the age when barons like John D. Rockefeller were transforming the Midwest by creating industrial behemoths like Standard Oil.

Of course, as historian John Grabowski has said, “technically, rivers never burn. It’s the crap on them.” Indeed, companies throughout the fast-industrializing Midwest didn’t have much of a plan for managing their waste. They just dumped it all into the river to the point where it “oozed rather than flowed”.

Cuyahoga River fire (source: Cleveland State University Library via alleghenyfront.org)

For decades, people lived with what Cleveland’s mayor called the “open sewer through the center of the city” because they assumed that a polluted commons, such as the river, was simply the cost of doing business. These companies were creating jobs and wealth; if they were forced to take on the cost of managing their refuse—a cost that they simply hadn’t considered while they were growing their businesses—wouldn’t they simply leave?

Spoiler: they didn’t. It took the combined forces of the environmental movement, the 1969 Cuyahoga River fire being featured in Time, and a willing government, but we got the Clean Water Act, which made companies responsible for their own pollution of common waterways. Now, refineries build the cost of managing effluents into the cost of doing business (though they might chafe against it). Standard Oil did not do this of their own accord. We made them.

How it came to this

The root of the issue for social media companies is that their algorithms were not built for fact-checking. There is a huge cost (to them) of performing this role, a role that their businesses were not built to handle. They are not going to curb their pollution of our information flows unless we make them.

Tech companies often give the impression that they blundered into news publishing absent-mindedly, without any forethought. Their businesses are built for engagement with a different sort of content. The more they can get you to click, follow, comment, and buy, the better they do. This is what their recommendation algorithms were essentially built for: to maximize eyeball time.

Recommendation algorithms were created right around the turn of the century, when digital content businesses were starting to host such massive inventories that navigating them would have been all but impossible without algorithms acting as your personal guide, helping you find content you might like. Pandora, the internet radio station, was one of the first businesses to run on such algorithms. It married the characteristics of songs with your stated preferences, and with the preferences of other users who shared your tastes, to create an extremely popular customized radio station. Amazon’s inventory of products is so large that without its algorithm-powered “Recommended for you” shelves, you’d be lost; with them, you might buy a few. 400 hours of video are uploaded to YouTube each minute; without its “Recommended” and “Watch next” lists, you wouldn’t have a hope of finding anything you’d like to watch.
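What those early systems shared is the collaborative-filtering idea: recommend the things that people with tastes like yours already enjoyed. Here is a deliberately minimal sketch of that idea in Python; the data, the names, and the crude overlap-counting similarity are invented for illustration, but the sketch captures the give-them-more-of-what-they-want logic described above.

def recommend(target, ratings, k=3):
    """Score items the target hasn't seen by how much similar users liked them."""
    seen = ratings[target]
    scores = {}
    for other, theirs in ratings.items():
        if other == target:
            continue
        overlap = len(set(seen) & set(theirs))   # crude taste similarity
        if not overlap:
            continue
        for item, rating in theirs.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + overlap * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Invented listening data: Alice overlaps with Bob, so she gets Bob's other favorite.
ratings = {
    "alice": {"jazz_mix": 5, "indie_mix": 4},
    "bob":   {"jazz_mix": 5, "blues_mix": 5},
    "carol": {"metal_mix": 5},
}
print(recommend("alice", ratings))   # -> ['blues_mix']

For songs and soap, this is exactly the right logic; the trouble comes when the same logic is pointed at news.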

Such algorithms have incredible reach. They have allowed the businesses that they power to achieve huge market shares in whatever niche they inhabit. They are built to be democratic down to their very bones, like the world’s most solicitous concierges, who give you whatever content you might enjoy with no judgment. This is appropriate when it comes to matters of taste. There should be no judgment about the movie you choose to watch or the soap you choose to buy.

But once these businesses had achieved dominance over internet eyeballs, they found themselves in a new business entirely. They also became purveyors of factual information and news. This was already a red flag, but nobody noticed. Certainly not the barely-out-of-adolescence tech CEOs who saw the news business as yet another arena for their engagement-driven rapacity for eyeballs, like movies or songs or posts from friends.

But clearly this doesn’t work for factual content. An anti-vax article might gain more likes than an article debunking it, but that doesn’t make it science. A Holocaust denial video might get more views than a documentary, but that doesn’t make it history. An article about a politician who eats babies might get thousands of outraged comments, but that doesn’t make it news.

It was a category error. Because a democratic “give-them-more-of-what-they-want” attitude seems to work for recommending movies, they thought it was also appropriate for deciding what counts as news.

This key difference between types of content has simply not been understood, nor appreciated, by tech executives, most of whom are engineers at heart. As journalist Kara Swisher often chides them, “take a humanities course, for God’s sake!”

Chafing against responsibility

Much like oil refineries back in the day, social media companies have chafed against taking responsibility for the content their algorithms push into the ether.

Sometimes they take cover under the notion that since it isn’t their content, they bear no responsibility for it. YouTube executives explicitly made this argument when they decided to allow conservative firebrand Steven Crowder to remain on their platform even though he had used his channel to unleash homophobic attacks on journalist Carlos Maza. “Even if a video remains on our site, it doesn’t mean we endorse/support that viewpoint,” they said.

I’d like to remind tech executives that vectors count just as much as, if not more than, that which is being propagated. We have a deep cultural association of rats and fleas with the bubonic plague that swept through Europe in the 14th century, but rats and fleas were merely the vectors; the plague was caused by a bacterium. Sparks in forests do no harm unless there is dry brush, in which case one has a wildfire.

Photo: Jim Watson/AFP/Getty Images

At other times, public outrage at their insouciance is treated as a PR problem that will go away with proper “handling”, rather than as a concrete problem to be solved. Two of Facebook’s top executives have tried two different such PR approaches: Sheryl Sandberg, famously, tried to handle public outrage at Facebook’s fake news problem by doing a hit job on the main complainers. Mark Zuckerberg usually goes the opposite route: apologize profusely, over and over, while still thumbing his nose at a panel of nine countries trying to hold him accountable.

Beaten down enough, tech executives sometimes turn around and blame us, the user community. This is clever, because of course recommendation algorithms use our own clicks as raw material to create more recommendations. But this gets to the core of my argument above. When it comes to matters of fact, the number of clicks on an article tells you nothing. As a mere consumer of news, I can have no independent information about whether a particular article is factual or not; it is the job of the publisher not to feed me garbage. No one asked social media companies to swallow up and destroy the local media that was attempting to play the role of factual gatekeeper, however imperfectly; they bigfooted their way in and snatched this job. But it takes some gall to swallow up a business while disdaining to perform the role that it was performing.

Whether it is the pleading from activists, the outrage from users, or the fear of being regulated, social media companies have finally understood that fake news is a problem for them. But even as they have begun to devote resources to information hygiene, their efforts often come too late or are insufficient. After the Parkland mass shooting tragedy, YouTube finally took steps to remove conspiracy videos portraying survivor David Hogg as a crisis actor, but not before one had already achieved No. 1 status. Facebook has been forced into having some human moderators; but rather than straightforwardly hiring them, Facebook chooses to rent poorly paid, abysmally treated contractors. Yet when it comes to fake news about themselves, Facebook is able to find the resources to do the job properly.

Social media executives tend to have a great deal of missionary zeal about the ultra-democratic, communitarian world they are creating, and tend to treat these problems as the breaking of a few eggs on the way to a glorious omelette. But I’d like to remind them that the freely polluting industrialists of the 19th century were also transfixed by their own virtue. They, too, thought of their pollution of the commons as an unfortunate side effect of the new world they were creating. They themselves, of course, moved their families uphill, away from the burning river, much like Silicon Valley executives often keep their own children away from their platforms. But can tech company executives thrive while democracy around them collapses, driven by their pollution of the commons? We will soon find out.