U.K. Lawmakers Slam Facebook, Google, Twitter on Hate Speech

  • Executives struggle to explain why some content not removed
  • Companies say they are doing better, lawmakers are skeptical

Skeptical British lawmakers grilled social media company executives on their progress tackling hate speech and violent extremism Tuesday, questioning them harshly over why specific content hadn’t been removed from their platforms.

Executives from Facebook Inc., Twitter Inc. and Google, the Alphabet Inc. division that runs YouTube, testified to the House of Commons Home Affairs Committee that they were making rapid progress in tackling extremism and hate speech, hiring thousands more employees to review content and increasingly using artificial intelligence software to police their platforms.

But lawmakers repeatedly confronted the executives with specific examples of abusive or extremist content that wasn’t taken down, in some cases despite repeated requests.

The hearing added to the growing list of woes confronting the big social media platforms in both the U.K. and globally. The companies face mounting pressure over potential antitrust violations, alleged underpayment of taxes and violations of privacy rights, as well as concerns that their platforms promote incivility and extremism. Twitter, Facebook and Google are facing public anger over the spread of fake news and the use of their networks by Russian-linked groups to influence the U.S. presidential election last year. And in another hearing room at Westminster on Tuesday, a different Parliamentary committee heard evidence about the role of fake news in the U.K.’s 2016 vote on European Union membership.


In a representative exchange during the Home Affairs Committee hearing, Yvette Cooper, a Labour Party politician who chairs the committee, asked why anti-Semitic tweets directed at Luciana Berger, another Labour politician, hadn’t been removed even after Cooper told Twitter executives about them. "What do we have to do?" Cooper asked. "We raised a clearly vile, anti-Semitic tweet with your organization, it was discussed and it is still there. And everyone accepted -- you accepted, your predecessor accepted -- that it is unacceptable and yet it is still there."

Sinead McSweeney, Twitter’s vice president for public policy and communications in Europe, said she thought the tweet would be a violation of the company’s policies, but that she would have to get back to the committee with an explanation of why it hadn’t been removed.

Similarly, Nicklas Lundblad, Google’s vice president of public policy for Europe, struggled to explain why it had taken Cooper eight months and a direct appeal to Google’s general counsel to get a particular extremist video from a far-right group removed. "I can understand that is disappointing," he said.

Cooper pointed out that the same video was now available on Facebook and asked how this could be possible if the three biggest social media networks were now sharing digital fingerprints of extremist videos, as the companies have said.

Digital Fingerprints

Simon Milner, Facebook’s policy director for the U.K., said that so far the companies were only sharing digital fingerprints of terrorist videos from the Islamic State and al Qaeda, drawing a rebuke from Cooper. "It is incomprehensible you are not sharing this kind of information about other kinds of violent extremism," she said.

Tim Loughton, a Conservative Party lawmaker, also bashed Twitter, asking why it hadn’t removed tweets using a hashtag that promoted violence against members of his party. McSweeney said that policing 500 million tweets a day wasn’t simple and that "no country in the world has politicians as a protected political class," an answer that didn’t satisfy Loughton. "You are profiting from the fact that people use your platforms to further the ills of society and you are allowing them to do it and you are not doing simple things to prevent it," he said.

The U.K. is considering bringing in regulations that would require social media companies to remove extremist content within two hours or face substantial fines.

The company executives said they were already making rapid progress towards meeting voluntary standards they agreed to with the Group of Seven, the forum that coordinates policy among democracies, and a similar voluntary policy the European Union has encouraged. Twitter on Monday pulled some white supremacists and other extremists from its platform, though some prominent, controversial people continue to have live accounts.

Facebook currently removes 83 percent of terrorist content within one hour, and its goal is to remove 100 percent within two hours, Milner said. Google’s YouTube currently removes 50 percent within two hours and 70 percent within eight hours, Lundblad said. Twitter spots and removes 75 percent of accounts promoting terrorist content before they issue a single tweet, McSweeney said, adding, "so the question of the one hour is increasingly irrelevant."

The U.K. is also considering imposing a levy on social media companies and internet service providers that would help fund education efforts aimed at making children safer online. Milner said Facebook was in favor of such education but "had concerns" about such a fee since it seemed based on similar "vice taxes" imposed previously on cigarettes, alcohol and gambling. "We don’t agree that social media is a vice like alcohol or tobacco or gambling," he said.
