How Facebook Can Fight the Hate

The platform has long been an accelerant for extremist thought. Can it also be a deterrent?


Last summer, Johannes Baldauf, an anti-hate-speech activist from Berlin, got a request for help from an unexpected place: Facebook Inc. When it came to stemming the spread of extremist messaging, Baldauf was accustomed to seeing the giant social network as part of the problem. Now it was asking him and other activists to act as a kinder, gentler, online version of Quentin Tarantino’s Nazi-hunting Inglourious Basterds squad. Their mission was to come up with a social media campaign that might make Germans less susceptible to the wave of fake news and right-wing propaganda scapegoating Europe’s growing population of immigrants and refugees.

Baldauf, 36, and his team at the nonprofit Amadeu Antonio Foundation specialize in an esoteric internet art known as “counternarrative.” The basic idea is to exploit the same online tools extremists use but in a way that undermines their hateful messages. During a daylong brainstorming session, the group came up with a meme that subtly mocks people who blame minorities for the mundane frustrations of daily life, such as packed subway cars.

They released images of everyday setbacks—a bad hair day, a cracked iPhone—accompanied by the phrase Kein Grund, Rassist zu werden (“No Reason to Be Racist”). Before long, internet users started contributing their own photos of scenarios that might, absurdly, spark bigotry. The phrase, adopted as a hashtag, began trending on Facebook, Instagram, and Twitter. Baldauf says it was “sort of fun.” It was also a good demonstration of how Facebook thinks social media can combat the vitriol it’s so often accused of spreading.

Counternarrative is a little-known but important element of Facebook’s plan to address online extremism. The issue has taken on new urgency since the election of Donald Trump and the rise of right-wing, anti-immigrant parties in Western Europe. In Germany, where Facebook has 29 million members, lawmakers recently introduced legislation requiring internet companies to remove content flagged as hate speech within 24 hours. The proposed fine is €50 million ($55.6 million) for the companies and as high as €5 million for executives charged with failing to act quickly. This is occurring as Facebook’s relationship with European regulators grows increasingly frosty: On May 18 the EU’s antitrust chief fined the company €110 million for making misleading statements about its 2014 acquisition of WhatsApp.

Despite spending millions of dollars and hiring armies of contractors around the world, social media companies often can’t delete hateful posts fast enough. Extremists set up accounts as quickly as the old ones are shut down, and it’s easy for bad stuff to slip through. “We review over 100 million pieces of content every month,” Facebook Chief Executive Officer Mark Zuckerberg wrote on his page on Feb. 16. “Even if our reviewers get 99% of the calls right, that’s still millions of errors over time.” The chance for something to slip through is particularly high with live video posts. In April a man in Thailand allegedly broadcast his child’s murder, and another man, in Cleveland, uploaded a video of himself killing someone; both videos remained accessible for hours. In response, Facebook announced it would hire 3,000 more people to monitor content. Even that may not be enough.


That’s why the company hopes activists such as Baldauf employing innovative techniques might help attack extremism and hatred at its source. Last year, Sheryl Sandberg, Facebook’s chief operating officer, traveled to Berlin to announce something called the Online Civil Courage Initiative, or OCCI. The effort, initially backed with €1 million, distributes small grants and advertising credits to anti-extremist organizations, with the goal of helping activists produce counternarrative and antihate campaigns.

A million euros might seem like chump change for Facebook, but it’s meaningful to the practitioners of counternarrative, who often operate on a shoestring. In a 2016 pilot, one Facebook-based campaign with a budget of just $3,750 reached more than 670,000 people. (Facebook says it’s contributed additional money to OCCI since Sandberg’s announcement but won’t say how much.) There’s “so much horrific stuff out there,” says Baldauf, who used to help police hate speech on StudiVZ, a social network once popular in Germany. “Social media can also be a tool for education.”

Facebook’s investment in counternarrative fits squarely into a new agenda unveiled by Zuckerberg in his February post. The 33-year-old co-founder says he wants Facebook to change the nature of communities—not just online but also in the real world. He wants these communities to be safer, more inclusive, more caring, better informed, and more civically engaged. It’s exactly the sort of social engineering Facebook has tried to distance itself from in the past. Counternarrative offers one of the first chances to see how Zuckerberg’s new vision might play out.

While Facebook had dabbled in ways to encourage counternarrative over the past decade, it began a more systematic push in February 2015, shortly after a White House summit on countering violent extremism. At the event, President Obama pressured representatives of Silicon Valley to do more to combat terrorism. Heeding the call, Facebook started holding student competitions and dayslong hackathons to develop digital tools to push back against online hate. Early efforts had limited results, because the projects that emerged tended to die as soon as the semester ended or the hackathon finished.

Then, in the summer of 2015, Facebook found itself in the center of a maelstrom in Germany. The number of online hate incidents was soaring as hundreds of thousands of refugees arrived there, fleeing civil wars in Syria and Afghanistan. In some cases, online hate led directly to real-world violence. That August a loose association of neo-Nazis, right-wing soccer hooligans, and members of the anti-Islamic movement Pegida used Facebook to incite demonstrations against a refugee shelter in Heidenau, near Dresden. Three nights of violent clashes injured more than 30 police officers.

As a result, German Justice Minister Heiko Maas began threatening to fine Facebook for facilitating hate speech, and Chancellor Angela Merkel confronted Zuckerberg over his company’s record on the matter during a luncheon at the United Nations. “We need to do better,” he told her, in comments inadvertently picked up by a live microphone. A few months later he told a crowd at a town hall-style event in Berlin that “we hear the message loud and clear” and “hate speech has no place on Facebook and in our community.”

The company wanted to be seen as proactive, according to people familiar with its executives’ thinking at the time, so Sandberg announced OCCI. The plan was to figure out what techniques might be most effective and then help thousands of activists use that knowledge to produce more—and better—counternarrative campaigns. The project was global, but there was no doubt why Sandberg had chosen to announce the program in Berlin.

To figure out what counternarratives work best, it helps to know what exactly to oppose. So Facebook tapped the International Centre for the Study of Radicalisation and Political Violence at King’s College London to reverse-engineer extremists’ digital propaganda and recruitment efforts. Peter Neumann, who runs the center, says the biggest challenge is figuring out the link between the popularity of content, as measured by shares or likes, and actual changes in behavior. In most cases, hate speech has no impact on people who see it. But for a minority, exposure to Islamic State content, for example, can lead to growing admiration for the terrorist group. An even smaller subset might be motivated to travel to Syria or carry out an attack.

Neumann says this works the same way any viral social media campaign hawking a consumer product does and that counterterrorists can learn from the ad industry. “We should think about it like selling Coca-Cola,” he says. “But people are always trying to reinvent the wheel because it has to do with terrorism.”

Social media can accomplish only so much. Take one famous campaign that predated Facebook’s counternarrative efforts: #bringbackourgirls. The hashtag—often handwritten on signs and held up in selfies—was used more than 1 million times in the three weeks after the Nigerian terrorist group Boko Haram kidnapped 276 schoolgirls from the village of Chibok in April 2014. Luminaries such as Michelle Obama and actress Salma Hayek helped propel the meme, and the campaign inspired demonstrations in the Nigerian capital city of Abuja, London, and Los Angeles. By some metrics used to judge online marketing, it was a phenomenal success. What’s less clear is whether there was any link between the campaign and the Nigerian government eventually negotiating the release of about 100 of the girls. The hashtag certainly hasn’t slowed or weakened Boko Haram.

This is why Baldauf remains skeptical of memes even as he works to create them. More effective, he says, is a much more direct, but more difficult, technique: using Facebook and other social media to identify and engage with young people who are in danger of falling under the influence of right-wing groups. In addition to promoting viral campaigns such as No Reason to Be Racist, Baldauf and his colleagues scour social media for signs that internet users might be drifting toward right-wing extremism. Then they try to engage them before extremist groups get a chance to do the same. Often this is more about providing emotional support or a feeling of belonging than engaging in an ideological debate. “This is not sexy,” Baldauf says. “This is boring stuff, sitting in front of a computer all day searching for people.”

Facebook funds another group, the London-based Institute for Strategic Dialogue, to help run OCCI. The institute conducted one of the most promising experiments on using social media to reach people at risk of joining violent groups. ISD researchers employed popular Facebook advertising tools to identify 154 people at risk of joining Islamic State or white nationalist groups. The tools allow marketers to target individuals by affinity groups, device type, age, gender, and other factors. ISD used them to find members associated with hate groups and then refined the list by looking at account photos and the nature and tone of their posts. Just being a member of an extreme political or religious group wasn’t enough. The researchers wanted to find people advocating violence or liking the posts of those who did.

Using a Facebook tool (since discontinued) that allowed users to pay to contact nonfriends, the researchers then sent those members messages from former extremists who’d been trained in deradicalization techniques. More than a third of the messages led to what the researchers classified as “sustained engagement,” with the targets trading at least five notes with the former extremists. Some asked for help leaving extremist groups or were curious why the former extremists had broken with radicalism.
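
Neither ISD nor Facebook has published the tooling behind this experiment, but the logic the researchers describe, broad ad-style targeting followed by manual behavioral filtering and a simple five-message threshold for “sustained engagement,” can be sketched in a few lines of Python. Every field name, rule, and number below is a hypothetical stand-in for illustration, not ISD’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical stand-in for an account surfaced by ad-style targeting."""
    user_id: str
    hate_group_member: bool      # affinity/group signal from the targeting tools
    advocates_violence: bool     # judged by hand from photos and posts
    violent_likes: int           # likes on posts that advocate violence
    messages_exchanged: int = 0  # notes traded with a trained former extremist

def at_risk(c: Candidate) -> bool:
    """Group membership alone isn't enough; look for active endorsement of violence."""
    return c.hate_group_member and (c.advocates_violence or c.violent_likes > 0)

def sustained_engagement(c: Candidate, threshold: int = 5) -> bool:
    """ISD counted an exchange of at least five notes as sustained engagement."""
    return c.messages_exchanged >= threshold

# Toy cohort standing in for the 154 accounts ISD reviewed by hand.
cohort = [
    Candidate("a", True, True, 3, 7),
    Candidate("b", True, False, 0, 0),
    Candidate("c", True, False, 2, 5),
]
targets = [c for c in cohort if at_risk(c)]
engaged = [c for c in targets if sustained_engagement(c)]
print(f"{len(targets)} at-risk targets, {len(engaged)} with sustained engagement")
```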

The program highlighted the way content removal can backfire. A quarter of the people identified as likely targets had their accounts removed by Facebook for posting extremist content during the course of the program. That may not be the best strategy. If a British teen reposts a jihadi video and gets his account taken away, for example, it doesn’t hurt Islamic State. Instead, it squanders a chance to reach the teen with countermessaging and potentially prevent his being recruited by the terror group.

Sasha Havlicek, the institute’s CEO, says counternarrative campaigns such as No Reason to Be Racist are designed to reinforce values such as tolerance and make communities “resilient” to extremist propaganda. She points out that public-health campaigns aimed at the general population have changed behavior—reducing rates of smoking and unsafe sex, for instance. She thinks the same will hold true for curbing racism and lessening the allure of Islamic extremism, provided those messages are delivered by people the intended audience considers credible. In a 2016 ISD study, counternarrative videos on Facebook, Google, and Twitter produced both positive and negative responses among users. Havlicek acknowledges that the evidence is tentative at best that exposure to these videos changed viewers’ attitudes or made them more resistant to extremist messages. That’s why she wants to conduct further research.

Other internet giants are experimenting with similar techniques. In 2016, Jigsaw, a division of Alphabet Inc. dedicated to using technology to counter online extremism, ran a pilot called the Redirect Method, which targeted search queries that people leaning toward joining Islamic State might use, such as “What is jihad?” People searching for those phrases were shown ads for counternarrative videos. The click-through rate was 75 percent better than for the average search ad, Google says.
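
Google hasn’t released the pilot’s internals, but the two moving parts described here, a curated mapping from risky queries to counternarrative ads and a comparison of click-through rates against a baseline, reduce to something like the following sketch. The queries, URLs, and traffic numbers are invented placeholders; only the 75 percent lift figure comes from the reporting above.

```python
from typing import Optional

# Hypothetical mapping from flagged queries to counternarrative ad destinations.
REDIRECT_ADS = {
    "what is jihad": "https://example.org/counternarrative-playlist",
}

def ad_for_query(query: str) -> Optional[str]:
    """Return a counternarrative ad for a flagged query, or None for everything else."""
    return REDIRECT_ADS.get(query.lower().strip(" ?!"))

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: the share of ad impressions that led to a click."""
    return clicks / impressions if impressions else 0.0

# Toy traffic numbers chosen so the lift matches the reported 75 percent.
baseline = ctr(clicks=20, impressions=1000)   # 2.0% for an average search ad
pilot = ctr(clicks=35, impressions=1000)      # 3.5% for the redirect ads
lift = (pilot - baseline) / baseline          # 0.75, i.e. 75 percent better
print(ad_for_query("What is jihad?"))
print(f"baseline {baseline:.1%}, pilot {pilot:.1%}, lift {lift:.0%}")
```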

Facebook does something similar when it serves up suicide prevention messages to users whose online activity suggests they may be contemplating killing themselves. The company could push counternarrative content—or direct individual intervention—to those whose behavior suggests they are enchanted by white nationalism or falling under the influence of Islamic State. But Facebook has been reluctant to go too far down this road. One problem, it says, is that unlike with suicide prevention, there’s no standard counterextremism content that would help in every case.

There are bigger philosophical questions, too. “What’s the trade-off between filtering and censorship? Freedom of experience and decency?” Yann LeCun, Facebook’s director of artificial intelligence research, asked reporters during a roundtable at the company’s Menlo Park, Calif., headquarters late last year. “The technology either exists or can be developed. But then the question is, how does it make sense to deploy it? And this isn’t my department.” Zuckerberg, whose department it certainly is, seems disinclined to get directly involved in serving counternarrative to users, despite his latest pledge to deploy Facebook to better societies. “Research shows that some of the most obvious ideas, like showing people an article from the opposite perspective, actually deepen polarization,” he wrote in his February post.

Counternarrative efforts also face hostility from right-wing groups, who charge that Facebook is unleashing the thought police. When Sandberg unveiled OCCI, for example, Canadian television commentator Ezra Levant, a sort of Rush Limbaugh of the North, lambasted it as “an official censorship program that explicitly targets right-wing content.”

On Jan. 18 about 200 journalists, lobbyists, and parliamentary staff members gathered in a large committee room on the fourth floor of Berlin’s historic Reichstag. They were there for a debate on Facebook, hate speech, and fake news, organized by Merkel’s Christian Democratic Union. On the panel, Eva-Maria Kirschsieper, Facebook’s top lobbyist in Germany, faced off against hostile lawmakers, academics, and a TV broadcaster who’d successfully sued the company for being too slow to remove offensive comments.

Kirschsieper, flustered and seemingly on the defensive, insisted that Facebook was serious about combating hate speech and fake news and emphasized that the issues were “highly complex,” with no easy fixes. The politicians were having none of it. Social media had devolved into a zone where “insults, denunciations, and libel are commonplace,” charged Volker Kauder, Merkel’s top parliamentary lieutenant. If part of the goal of OCCI was to combat not only hate speech but also rising anti-Facebook sentiment in Europe, it was failing.

Richard Allan, Facebook’s top European lobbyist, says the company hasn’t been comfortable emphasizing its counternarrative campaigns because they’re experimental. “We need to try something out, and if it is effective, we should scale it,” he says. But in the meantime, “we don’t want to oversell what it is we’re doing.”

There have been some small, encouraging victories. In 2015, Baldauf’s team made contact via Facebook with a student in Dresden who seemed captivated by neo-Nazism. The student wasn’t interested in talking, and the conversation was brief. Months later, in January 2016, a scandal erupted in Germany over reports of sexual assaults committed by immigrants in Cologne during New Year’s Eve celebrations. Some of the most inflammatory claims circulating online proved false, and soon after they were debunked, the student got back in touch with Baldauf’s team and said he wanted to talk.

This time the dialogue lasted much longer, and the student eventually came to see how he’d been misled by right-wing propaganda. “Sometimes all it takes,” Baldauf says, “is talking to someone with a different point of view.”

—With Stefan Nicola, Elliott Snyder, Birgit Jennen, Rainer Buergin, and Sarah Frier

Editor's note: A previous version of this article improperly quoted Ross Frenett, a former employee of the Institute for Strategic Dialogue.