Why Facebook Can’t Fix Itself

By Andrew Marantz, The New Yorker

The platform is overrun with hate speech and disinformation. Does it actually want to solve the problem?

When Facebook was founded, in 2004, the company had few codified rules about what was allowed on the platform and what was not. Charlotte Willner joined three years later, as one of the company’s first employees to moderate content on the site. At the time, she said, the written guidelines were about a page long; around the office, they were often summarized as, “If something makes you feel bad in your gut, take it down.” Her husband, Dave, was hired the following year, becoming one of twelve full-time content moderators. He later became the company’s head of content policy. The guidelines, he told me, “were just a bunch of examples, with no one articulating the reasoning behind them. ‘We delete nudity.’ ‘People aren’t allowed to say nice things about Hitler.’ It was a list, not a framework.” So he wrote a framework. He called the document the Abuse Standards. A few years later, it was given a more innocuous-sounding title: the Implementation Standards.

These days, the Implementation Standards comprise an ever-changing wiki, roughly twelve thousand words long, with twenty-four headings—“Hate Speech,” “Bullying,” “Harassment,” and so on—each of which contains dozens of subcategories, technical definitions, and links to supplementary materials. These are located on an internal software system that only content moderators and select employees can access. The document available to Facebook’s users, the Community Standards, is a condensed, sanitized version of the guidelines. The rule about graphic content, for example, begins, “We remove content that glorifies violence.” The internal version, by contrast, enumerates several dozen types of graphic images—“charred or burning human beings”; “the detachment of non-regenerating body parts”; “toddlers smoking”—that content moderators are instructed to mark as “disturbing,” but not to remove.

Facebook’s stated mission is to “bring the world closer together.” It considers itself a neutral platform, not a publisher, and so has resisted censoring its users’ speech, even when that speech is ugly or unpopular. In its early years, Facebook weathered periodic waves of bad press, usually occasioned by incidents of bullying or violence on the platform. Yet none of this seemed to cause lasting damage to the company’s reputation, or to its valuation. Facebook’s representatives repeatedly claimed that they took the spread of harmful content seriously, indicating that they could manage the problem if they were only given more time. Rashad Robinson, the president of the racial-justice group Color of Change, told me, “I don’t want to sound naïve, but until recently I was willing to believe that they were committed to making real progress. But then the hate speech and the toxicity keeps multiplying, and at a certain point you go, Oh, maybe, despite what they say, getting rid of this stuff just isn’t a priority for them.”

There are reportedly more than five hundred full-time employees working in Facebook’s P.R. department. These days, their primary job is to insist that Facebook is a fun place to share baby photos and sell old couches, not a vector for hate speech, misinformation, and violent extremist propaganda. In July, Nick Clegg, a former Deputy Prime Minister of the U.K. who is now a top flack at Facebook, published a piece on AdAge.com and on the company’s official blog titled “Facebook Does Not Benefit from Hate,” in which he wrote, “There is no incentive for us to do anything but remove it.” The previous week, Guy Rosen, whose job title is vice-president for integrity, had written, “We don’t allow hate speech on Facebook. While we recognize we have more to do . . . we are moving in the right direction.”

It would be more accurate to say that the company is moving in several contradictory directions at once. In theory, no one is allowed to post hate speech on Facebook. Yet many world leaders—Rodrigo Duterte, of the Philippines; Narendra Modi, of India; Donald Trump; and others—routinely spread hate speech and disinformation, on Facebook and elsewhere. The company could apply the same standards to demagogues as it does to everyone else, banning them from the platform when necessary, but this would be financially risky. (If Facebook were to ban Trump, he would surely try to retaliate with onerous regulations; he might also encourage his supporters to boycott the company.) Instead, again and again, Facebook has erred on the side of allowing politicians to post whatever they want, even when this has led the company to weaken its own rules, to apply them selectively, to creatively reinterpret them, or to ignore them altogether.

Dave Willner conceded that Facebook has “no good options,” and that censoring world leaders might set “a worrisome precedent.” At the same time, Facebook’s stated reason for forbidding hate speech, both in the Community Standards and in public remarks by its executives, is that it can lead to real-world violence. Willner went on, “If that’s their position, that hate speech is inherently dangerous, then how is it not more dangerous to let people use hate speech as long as they’re powerful enough, or famous enough, or in charge of a whole army?”

The Willners left Facebook in 2013. (Charlotte now runs the trust-and-safety department at Pinterest; Dave is the head of community policy at Airbnb.) Although they once considered themselves “true believers in Facebook’s mission,” they have become outspoken critics of the company. “As far as I can tell, the bulk of the document I wrote hasn’t changed all that much, surprisingly,” Dave Willner told me. “But they’ve made some big carve-outs that are just absolute nonsense. There’s no perfect approach to content moderation, but they could at least try to look less transparently craven and incoherent.”

In a statement, Drew Pusateri, a spokesperson for Facebook, wrote, “We’ve invested billions of dollars to keep hate off of our platform.” He continued, “A recent European Commission report found that Facebook assessed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter. While this is progress, we’re conscious that there’s more work to do.” It is possible that Facebook, which owns Instagram, WhatsApp, and Messenger, and has more than three billion monthly users, is so big that its content can no longer be effectively moderated. Some of Facebook’s detractors argue that, given the public’s widespread and justified skepticism of the company, it should have less power over users’ speech, not more. “That’s a false choice,” Rashad Robinson said. “Facebook already has all the power. They’re just using it poorly.” He pointed out that Facebook consistently removes recruitment propaganda by ISIS and other Islamist groups, but that it has been far less aggressive in cracking down on white-supremacist groups. He added, “The right question isn’t ‘Should Facebook do more or less?’ but ‘How is Facebook enforcing its rules, and who is set up to benefit from that?’ ”

In public, Mark Zuckerberg, Facebook’s founder, chairman, and C.E.O., often invokes the lofty ideals of free speech and pluralistic debate. During a lecture at Georgetown University last October, he said, “Frederick Douglass once called free expression ‘the great moral renovator of society.’ ” But Zuckerberg’s actions make more sense when viewed as an outgrowth of his business model. The company’s incentive is to keep people on the platform—including strongmen and their most avid followers, whose incendiary rhetoric tends to generate a disproportionate amount of engagement. A former Facebook employee told me, “Nobody wants to look in the mirror and go, I make a lot of money by giving objectively dangerous people a huge megaphone.” This is precisely what Facebook’s executives are doing, the former employee continued, “but they try to tell themselves a convoluted story about how it’s not actually what they’re doing.”

