Don’t let them off the hook: Why Social Media companies are responsible for fake news

We will talk about social media, I promise. But I want to first tell the story of Dan Rather’s fall from grace.

Dan Rather spent decades as the CBS Evening News anchor, one of the big three for nightly news. In 2004, on 60 Minutes Wednesday, he reported on the Killian documents: a series of memos that were critical of George W. Bush’s service in the Air National Guard. It turned out that there were many reasons—including the use of a modern typeface—to doubt the authenticity of the memos. Eventually, CBS News recanted the story, fired the producer, and forced Dan Rather to move up his retirement. The entire episode came to be known as RatherGate.

A highly respected, decades-long career in news was capped by a “-Gate” because of a lapse in fact-checking. Of course, no one believed that Dan Rather’s team had forged the memos themselves. Instead, what people were objecting to was their lack of editorial judgment. Given their giant megaphone, they had the responsibility, as a publisher, to be a gatekeeper for factual news.

Platform or publisher?

Why, then, do we not hold social media companies to the same standard? Why do we let them get away with spreading fake news, propaganda, conspiracies, and hate speech through their platforms?

At heart, I would argue, is a semantic confusion. We generally don’t hold pure platforms accountable for the content that is carried on their wires. For instance, if I were to receive a death threat over my cell phone, not for a minute would I think to blame my wireless carrier, Cricket. The fault would lie with the threat-maker alone.

On the other hand, we can and do beat up on newspapers, cable shows, and even bookstores that carry objectionable content. As publishers, we expect them to have volition and exercise choice.

Social media executives have blithely shape-shifted from one form to another, depending on whether being a publisher or a platform suits them better in the moment. It is to their advantage that we remain confused.

On the one hand, they would love to be thought of as a public square, where absolutely everyone and anyone is permitted to exercise their free speech, no matter how unsavory that speech is. This permits tech companies to sidestep any shred of responsibility for what these third-party interlocutors say, much like the wireless carrier in my example of a person making death threats over a cell phone.

On the other hand, as they have gobbled up more and more of the business of local news, they have come to define what public discourse even is. A public square is a grand and noble thing. But one thing it does not do is give some people a megaphone, and attenuate others. Social media does this daily, based on their currency of choice—clicks.

Thriving on our confusion, tech executives have been able to argue simultaneously for both kinds of dispensation. For example, in a recent interview with journalist Kara Swisher, Facebook CEO Mark Zuckerberg made the case for hosting known conspiracy outlets such as InfoWars, and even Holocaust deniers, based on his conception of Facebook as a neutral utility. At the same time, his lawyers were arguing in court that since Facebook is a publisher, it has a First Amendment right to make editorial choices that cannot be questioned.

Journalist Carole Cadwalladr faced a similarly blithe dismissal from Google executives when she informed them that their platform was sending people searching for phrases like “Jews are…” and “Hitler was…” into conspiracist rabbit holes. Pure platform, move along, they said, washing their hands of it. But Google and YouTube have shown themselves perfectly capable of exercising their editorial finger when the public outcry becomes loud enough—for example, in their recent deplatforming of Alex Jones and InfoWars.

The truth is, extremism on their platforms lights up the click-o-meters and lines their pockets. They have no interest in policing their platforms or in us sorting out our confusion. But it is, actually, possible to think through the issues clearly and hold them to account.

The Kinkos analogy

The classic analogy offered by the “platform” pleaders is that of a print-and-copy center—call it Kinkos. Imagine a person who goes to Kinkos and prints out a stack of child pornography to distribute in town. Since child pornography is not among the forms of speech protected by the First Amendment, our imagined customer is clearly breaking the law by possessing and distributing it. But is Kinkos?

Kinkos is a neutral platform and cannot be blamed. Customers who walk in are not screened for the content of what they want to print, nor would it be scalable to have Kinkos personnel inspect each document before it goes into the copy machine.

There is much that is similar about social media companies. No Facebook or YouTube employee looks over your shoulder as you post. Nor do you have to apply to an editorial board before your post shows up. But there the similarity ends.

When you make your copies at Kinkos, you walk away with a stack; and there ends the relationship between Kinkos and your content, whether you were printing holiday cards or our less savory example above.

Of course, such is not the case with social media. For them, the content you post is the gold that they woo you for. Once it’s on their platform, it drives engagement. A recipe video you create might show up in YouTube’s recommendations. A linked news article might get thousands of likes and thereby show up in tens of thousands of Facebook News Feeds. A striking meme or a witty comment might travel far and wide among Instagram’s global users. Or your post might languish, without a like or a comment, and thus barely be seen at all.

A platform and a publisher

If you were watching carefully, you will have spotted both functions right there. Social media companies play the passive role of a platform while they merely host third-party content. But they play the active role of a publisher when they affirmatively push selected content to your feed. There is editorial judgment in deciding which posts are heavily promoted and which languish without any eyeballs on them. Who does this? Who is the editor that determines virality, and based on what characteristics of the content?

We will get to that. But let’s work with the Kinkos analogy further. Imagine that copies of the prints you made at Kinkos stayed with them. Each morning, they picked a small subset of all the millions of pages that were copied at their nationwide offices the day before, to put into a newsletter. Recipes, memes, news articles, clever bon mots, and other selected content appeared in your mailbox daily, neatly formatted.

Would you then be so sanguine if they delivered child pornography in their curated newsletter? What if, unbeknownst to you, Kinkos was delivering child pornography to the pedophile next door, bomb-making instructions to the psychopath across town, or misinformation about vaccines to the frantic new mother across the street? Or if, the day before an election, they delivered completely made-up “news” that the candidate running in your district was prone to eating babies?

Would we then forgive this imagined Kinkos—nothing at all like our Kinkos on Earth One—for their newsletters? No doubt they could plead that it wasn’t they who wrote the fake news or the misinformation about vaccines—it was third-party content from their customers. “We only chose the articles that we spotted people copying most avidly at our copy centers!”—they might say.

Nonetheless, they’re the ones who chose to distribute it. We would laugh them out of town if they shrugged their shoulders and professed helplessness at the content of their newsletters, the way social media executives regularly do.

When algorithms do it, does it count?

I trust that you see the absurdity of executives pleading helplessness in the Kinkos thought experiment. You probably imagined a human, a Kinkos employee perhaps, picking out objectionable content to put in their newsletter. But of course, there are no human fingerprints on the content that social media companies push to your feed. Those choices are made entirely by algorithms.

Facebook’s news feed, its suggested pages and groups, Amazon’s virtual shelves, and YouTube’s recommended videos are all highly curated lists, whose contents are produced by exquisitely crafted algorithms that these companies treat as their crown jewels. The teams that craft them are just as integral to the tech companies’ businesses as the editorial team that determines the stories in a newspaper is to a newsroom. So when an algorithm sends fake, nonfactual news your way, that represents a failure of the company’s editorial function just as much as the Killian memos represented a failure of Dan Rather’s team at CBS News.

It is subtle, but you can tell that each time there is a scathing news article about propaganda promoted through social media, executives point to their algorithms, as if to absolve themselves—as if to say, “They did it, not us!”

(Image source: NTSB via ArsTechnica.com)

Last year, an algorithm-controlled self-driving Uber car ran into a 49-year-old pedestrian crossing in front of it and killed her. The NTSB investigated the accident. While Uber waited for the results, they took all their self-driving cars off the road in several cities for nine months. Prosecutors considered charging Uber with a crime. The event caused a deep reckoning in the autonomous car industry and a widespread understanding that the technology is not ready to be put on the road yet.

Industry after industry has faced automation, where software now does the job that humans used to do. But no one accepts errors made by software as any more forgivable than errors made by a human.

Whether Facebook’s news feed, YouTube’s “watch next” list, etc., are put together by algorithms or human editors, they represent just as much of a failure when they send fake news your way. A recent study found that Facebook was the biggest disseminator of fake news in the months leading up to the 2016 election. This article pins the percentage at 22%: almost a quarter of all the fake news spread in the months before the 2016 election was spread through Facebook. But that woefully understates the problem: the truth is, for many voters, the entirety of their Facebook news feed was fake.

The problem is global. Investigative journalist Maria Ressa of the Philippines has called Facebook the “fertilizer of democratic collapse” in her country due to the fake news spread through it. Myanmar’s military instituted a propaganda campaign through inauthentic pages and articles on Facebook that led to genocide against the country’s Rohingya Muslims.

To be clear, the problem isn’t, and has never been, merely that social media companies host objectionable content on their platforms. The problem, rather, is that they selectively pump the filth out.

An incomplete business model

There was a time in America when rivers frequently caught on fire. The Cuyahoga River alone, which runs through Cleveland and empties into Lake Erie, caught fire more than a dozen times, beginning in the Gilded Age: the era when barons like John D. Rockefeller were transforming the Midwest by creating industrial behemoths like Standard Oil.

Of course, as historian John Grabowski has said, “technically, rivers never burn. It’s the crap on them.” Indeed, companies throughout the fast-industrializing Midwest didn’t have much of a plan for managing their waste. They just dumped it all into the river to the point where it “oozed rather than flowed”.

Cuyahoga River fire (source: Cleveland State University Library via alleghenyfront.org)

For decades, people lived with what Cleveland’s mayor called the “open sewer through the center of the city” because they assumed that a polluted commons, such as the river, was simply the cost of doing business. These companies were creating jobs and wealth; if they were forced to take on the cost of managing their refuse—a cost they hadn’t considered while growing their businesses—wouldn’t they simply leave?

Spoiler: they didn’t. It took the combined forces of the environmental movement, the 1969 Cuyahoga River fire being featured in Time, and a willing government, but we got the Clean Water Act, which made companies responsible for their own pollution of common waterways. Now, refineries build the cost of managing effluents into the cost of doing business (though they might chafe against it). Standard Oil did not do this of their own accord. We made them.

How it came to this

The root of the issue for social media companies is that their algorithms were not built for fact-checking. There is a huge cost (to them) to performing this role, a role their businesses were not built to handle. They are not going to curb their pollution of our information flows unless we make them.

Tech companies often give the impression that they blundered into news publishing absent-mindedly, without any forethought. Their businesses are built for engagement with a different sort of content. The more they can get you to click, follow, comment on, and buy, the better they do. This is what their recommendation algorithms were essentially built for—to maximize eyeball time.
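To make that concrete, here is a minimal, purely illustrative Python sketch of what “ranking for engagement” amounts to. Every name, weight, and number in it is hypothetical—this is not any company’s actual system—but it captures the optimization target described above: surface whatever is predicted to hold your attention.

```python
# Hypothetical sketch of engagement-driven feed ranking (not any real system).
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click_rate: float      # model's guess that you will click
    predicted_watch_seconds: float   # model's guess of time you will spend

def engagement_score(post: Post) -> float:
    """Score a post purely by expected clicks and eyeball time.
    Note what is absent: nothing here asks whether the content is true."""
    return 10.0 * post.predicted_click_rate + post.predicted_watch_seconds

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts so the most 'engaging' ones come first."""
    return sorted(candidates, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("sober-debunking-article", 0.02, 40.0),
        Post("outrage-bait-conspiracy", 0.09, 180.0),  # outrage tends to win
    ])
    print([p.post_id for p in feed])  # the conspiracy post ranks first
```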

Recommendation algorithms came into their own right around the turn of the century, when digital content businesses were starting to host such massive inventories that navigating them would have been all but impossible without algorithms acting as your personal guide, helping you find content that you might like. Pandora, the internet radio station, was one of the first businesses that ran on such algorithms. It married the characteristics of songs with your stated preferences, and the preferences of other users who shared your tastes, to create an extremely popular customized radio station. Amazon’s inventory of products is so large that without its algorithm-powered “Recommended for you” shelves, you’d be lost; with them, you might buy a few things. 400 hours of video are uploaded to YouTube each minute; without its “Recommended” and “Watch next” lists, you wouldn’t have a hope of finding anything that you’d like to watch.
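For the curious, here is a toy sketch of the taste-matching idea just described: blend how well an item’s characteristics fit your stated preferences with how much users of similar taste enjoyed it. The data, weights, and function names are invented for illustration and are not drawn from Pandora’s, Amazon’s, or YouTube’s actual systems.

```python
# Toy taste-matching recommender: content match blended with collaborative signal.
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse feature/preference vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend_score(item_features, my_prefs, neighbor_ratings, item_id):
    """Blend how well the item's traits fit my tastes (content match)
    with how much users similar to me enjoyed it (collaborative signal)."""
    content = cosine(item_features[item_id], my_prefs)
    collab = sum(r.get(item_id, 0.0) for r in neighbor_ratings) / max(len(neighbor_ratings), 1)
    return 0.5 * content + 0.5 * collab

# Usage: score one song against my stated tastes and my "taste neighbors".
items = {"song_a": {"acoustic": 1.0, "minor_key": 0.8}}
my_prefs = {"acoustic": 0.9, "minor_key": 0.2}
neighbors = [{"song_a": 0.9}, {"song_a": 0.7}]   # ratings by users like me
print(recommend_score(items, my_prefs, neighbors, "song_a"))
```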

Such algorithms have incredible reach. They have allowed the businesses that they power to achieve huge market shares in whatever niche they inhabit. They are built to be democratic down to their very bones, like the world’s most solicitous concierges, who give you whatever content you might enjoy with no judgment. This is appropriate when it comes to matters of taste. There should be no judgment about the movie you choose to watch or the soap you choose to buy.

But once these businesses had achieved dominance over internet eyeballs, they found themselves in a new business entirely. They also became purveyors of factual information and news. This was already a red flag, but nobody noticed. Certainly not the barely-out-of-adolescence tech CEOs, who saw the news business as yet another arena for their engagement-driven rapacity for eyeballs, like movies or songs or posts from friends.

But clearly this doesn’t work for factual content. An anti-vax article might gain more likes than an article debunking it, but that doesn’t make it science. A Holocaust denial video might get more views than a documentary, but that doesn’t make it history. An article about a politician who eats babies might get thousands of outraged comments, but that doesn’t make it news.

It was a category error. Because a democratic “give-them-more-of-what-they-want” attitude seems to work for recommending movies, they thought it was also appropriate for deciding what counts as news.

This key difference between types of content has simply not been understood, nor appreciated, by tech executives, most of whom are engineers at heart. As journalist Kara Swisher often chides them, “take a humanities course, for God’s sake!”

Chafing against responsibility

Much like oil refineries back in the day, social media companies have chafed against taking responsibility for the content their algorithms push into the ether.

Sometimes they take cover under the notion that since it isn’t their content, they bear no responsibility for it. YouTube executives explicitly made this argument when they decided to allow conservative firebrand Steven Crowder to remain on their platform even though he had used his channel to unleash homophobic attacks against journalist Carlos Maza. “Even if a video remains on our site, it doesn’t mean we endorse/support that viewpoint,” they said.

I’d like to remind tech executives that vectors matter just as much as, if not more than, that which is being propagated. We have a deep cultural association of rats and fleas with the Bubonic Plague that swept through Europe in the 14th century, but rats and fleas were merely the vectors—the plague was caused by a bacterium. Sparks in forests do no harm unless there is dry brush, in which case one has a wildfire.


At other times, public outrage at their insouciance is treated as a PR problem that will go away with proper “handling”, rather than as a concrete problem to be solved. Two of Facebook’s top executives have taken two different such PR approaches: Sheryl Sandberg, famously, tried to handle public outrage at Facebook’s fake news problem by commissioning a hit job against the main complainers. Mark Zuckerberg usually goes the opposite route: apologize profusely over and over, while still thumbing his nose at a panel of nine countries trying to hold him accountable.

Beaten down enough, tech executives sometimes turn around and blame us, the user community. This is clever, because recommendation algorithms do, of course, use our own clicks as raw material to create more recommendations. But this gets to the core of my argument above. When it comes to matters of fact, the number of clicks on an article tells you nothing. As a mere consumer of news, I can have no independent information about whether a particular article is factual or not; it is the job of the publisher not to feed me garbage. No one asked social media companies to swallow up and destroy the local media that was attempting, however imperfectly, to play the role of factual gatekeeper; they bigfooted their way in and snatched this job. But it takes some gall to swallow up a business while disdaining to perform the role it once played.

Whether it is the pleading from activists, the outrage from users, or the fear of being regulated, social media companies have finally understood that fake news is a problem for them. Even as they have begun to devote resources to ensuring information hygiene, their efforts often come too late or are insufficient. After the Parkland mass shooting tragedy, YouTube finally took steps to remove conspiracy videos portraying survivor David Hogg as a crisis actor, but not before one such video had climbed to the No. 1 spot on its Trending list. Facebook has been forced into having some human moderators; but rather than hire them outright, it rents poorly paid, abysmally treated contractors. Yet when it comes to fake news about itself, Facebook manages to find the resources to do the job properly.

Social media executives tend to have a great deal of missionary zeal about the ultra-democratic, communitarian world they are creating, and tend to treat these problems as the breaking of a few eggs on the way to a glorious omelette. But I’d like to remind them that the freely-polluting industrialists of the 19th century were also transfixed by their own virtue. They, too, thought of their pollution of the commons as an unfortunate side effect of the new world they were creating. They themselves, of course, moved their families uphill, away from the burning river, much like Silicon Valley executives often keep their own children away from their platforms. But can tech company executives thrive while democracy around them collapses, driven by their pollution of the commons? We will soon find out.
