European Commission Demands Internet Firms Do More on Terror
Recommends more automatic tools to detect, block illegal posts
Asks for better coordination with government and one another
The European Commission is calling upon social media companies including Facebook Inc. and Alphabet Inc. to develop a common set of tools to detect, block and remove terrorist propaganda and hate speech.
In guidelines issued Thursday, the commission asked the online platforms to appoint contact persons who can be reached quickly with requests to remove illegal content. It asked them to lean more heavily on networks of "trusted flaggers" -- experts in what constitutes illegal content -- and to make it easier for average users to flag and report possible extremist content.
"Illegal content should be removed as fast as possible, and can be subject to specific timeframes, where serious harm is at stake, for instance in cases of incitement to terrorist acts," the commission said.
In announcing the new set of recommendations, EU Justice Commissioner Vera Jourova said that she had stopped using Facebook and deleted her account after finding the social network too often served as "a highway for hatred."
The commission did not specify exactly how quickly social media companies should take down content, saying it would analyze the issue further. In May 2016, a number of social media companies, including Facebook Inc., Twitter Inc., and Google's YouTube, voluntarily committed to trying to take down illegal content within 24 hours. Under this program, the share of flagged content removed within the 24-hour window has risen from 30 percent to 60 percent, the EU said Thursday.
Since then, Germany has passed a law requiring hate speech to be removed within 24 hours of it being flagged, with penalties of up to 50 million euros ($58.8 million) for repeated failures to comply. British Prime Minister Theresa May earlier this month proposed new rules that would require internet companies to take down extremist content within two hours.
Facebook said it was studying the commission's recommendations. Google and Twitter did not immediately respond to requests for comment.
The commission said online platforms should "introduce safeguards to prevent the risk of over-removal." It did not specify what these safeguards should be.
In arguing against the new German law, Facebook said that the large fines and tight deadlines for content removal only served to encourage it to err on the side of taking down questionable content, potentially harming free speech.
The commission also said the internet companies should take steps to dissuade users from repeatedly uploading illegal content and encouraged them to develop more automatic tools to prevent the re-appearance of content that had previously been removed.
Facebook, Google, Twitter and Microsoft Corp. teamed up in December 2016 to create a shared database of "digital fingerprints" for videos that any of the companies remove for violating their policies on extremist content. If someone tries to upload the same video to a different social media platform, it is automatically flagged for review.
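The workflow the companies described -- fingerprint a removed video, share the fingerprint, flag matching uploads elsewhere -- can be sketched roughly as follows. This is a hypothetical illustration, not the companies' actual system: production systems use perceptual hashes that survive re-encoding, while this sketch uses a plain SHA-256 of the file bytes, and all class and method names here are invented for the example.

```python
import hashlib


class SharedHashDatabase:
    """Illustrative shared database of fingerprints for removed videos."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(video_bytes: bytes) -> str:
        # Real systems use perceptual hashing; SHA-256 of the raw
        # bytes is used here only to make the workflow concrete.
        return hashlib.sha256(video_bytes).hexdigest()

    def register_removed(self, video_bytes: bytes) -> None:
        # Called when any participating platform removes a video
        # for violating its extremist-content policies.
        self._hashes.add(self.fingerprint(video_bytes))

    def flag_on_upload(self, video_bytes: bytes) -> bool:
        # Returns True when an upload matches a known fingerprint,
        # meaning it should be routed to human review.
        return self.fingerprint(video_bytes) in self._hashes


db = SharedHashDatabase()
db.register_removed(b"removed-clip-payload")        # platform A removes a video
print(db.flag_on_upload(b"removed-clip-payload"))   # platform B: same file -> True
print(db.flag_on_upload(b"unrelated-clip-payload")) # unrelated upload -> False
```

An exact-match hash like this only catches byte-identical re-uploads; the appeal of perceptual hashing in the real database is that it also matches copies that have been cropped, re-compressed, or slightly altered.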
The guidelines the commission issued Thursday are non-binding recommendations, but it held out the prospect of future legislation if the companies do not take additional steps along these lines by May 2018.
For years, sites like Facebook and YouTube have largely relied on hundreds of contractors and employees to manually review posts that users flag for violating their terms of service. While this process was far from perfect, company executives long insisted that automated systems -- which rely on artificial intelligence -- were not yet sophisticated enough to handle the task.
In the past year, as these companies have come under greater political and legal pressure to do more to address terrorist propaganda, hate speech and fake news, they have begun leaning more heavily on automated systems.
"AI can spot a terrorist’s insignia or flag, but has a hard time interpreting a poster’s intent," Monika Bickert, Facebook’s global head of policy, said last week in New York at a meeting between government leaders and social media executives on the sidelines of the opening of the United Nations General Assembly. "That’s why we have thousands of reviewers, who are native speakers in dozens of languages, reviewing content - including content that might be related to terrorism - to make sure we get it right."
Bickert said the company had gotten better at identifying and removing terrorist content, and that it now finds most of the material it takes down on its own, without having to rely on users to flag it. She said the company had also built "strong mechanisms" for working with law enforcement.