Europe Gets U.S. Tech Leaders to Self-Censor

Facebook, Twitter, YouTube and Microsoft have agreed to let private groups decide which content is "hate speech" and should be removed.

Voluntarily silenced.


There's a lot about European regulations, or regulatory intentions, that U.S. Internet giants don't like. They hate being described and treated as monopolies, and any mention of paying taxes where they operate -- as European countries have long wanted them to do -- instantly puts them on the defensive. Yet ask Facebook, Microsoft, Twitter and YouTube to censor their content, and they will happily oblige. Of all the U.S. rules that have allowed them to grow as big as they have, freedom of speech appears to be the least important to them.

The four U.S. companies have accepted a European Union-dictated code of conduct, which obliges them to "review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content." The reviewing is to be done by "civil society organizations" and "trusted reporters": the EU and its member states are to "ensure access" to them.

This arrangement effectively outsources the enforcement of Europe's hate speech laws to nongovernmental organizations with a direct interest in describing certain kinds of speech as hate-inciting -- and to private companies that find it easier to delete a comment or a post than to fight those organizations or governments and risk losing money.

European laws don't hold free speech quite as sacred as does the U.S. Constitution. Whereas in the U.S. an individual might cite First Amendment protection for offensive speech such as Holocaust denial, in 13 of the 28 EU member states, denying the Holocaust is a crime. (EU Commissioner Vera Jourova finds that number "disgraceful" -- too low.) All EU countries have laws against racially motivated threats and the direct incitement to religious or xenophobic violence, and some have also legislated against insults, though that is only suggested, not required, by the EU. As is often the case in Europe, the legal frameworks vary widely from country to country.

The EU countries are unable to police Facebook, on which 293,000 profiles are updated every minute. They want Facebook to police itself. That's not easy for a U.S. company, coming from a more permissive free speech environment and probably unable to interpret what constitutes a threat or an insult under the laws of each of the 28 countries. And, as Facebook chief executive Mark Zuckerberg pointed out last year, "the problem is if you break the law in a country, then oftentimes that country will just block the whole service entirely."

That's where the NGOs and "trusted reporters" come in. Last fall, German Chancellor Angela Merkel asked Zuckerberg whether Facebook was working on getting hate posts off the network, and he assured her he was. Just weeks before, the company had joined forces with a German voluntary monitoring group to track and remove xenophobic content.

In France, Facebook and the other networks have been slower to find such partners, and that has resulted in problems. This month, three high-profile French anti-racist and anti-homophobia groups sued Facebook, Twitter and YouTube for being too selective in following up on their requests for the removal of xenophobic or homophobic content. Presumably, following the code of conduct will rid the social networks of such problems: If a country says a certain organization should be trusted on removal requests, then it's safer to do it than not to.

That certainly simplifies things for the companies. Russians and Ukrainians who post about the two countries' conflict know how it is when anybody can complain about anybody. Last year, a veritable war of bans was unleashed when both sides flooded Facebook with requests that their opponents' posts be removed for alleged hate speech. The company looked for racially charged epithets or anything offensive in the posts, but as often as not, users were blocked by mistake or after hundreds of abuse reports fired off by bots. Ukrainians even petitioned Zuckerberg to "block abuse reports from Russia." He refused, telling them to be less aggressive -- but then Russia and Ukraine are not EU members, and Facebook hasn't signed up to a code of conduct there.

Streamlined, semi-automatic censorship by interest groups should work better than a free-for-all. That, of course, is a problem for "politically incorrect" right-wing populists; the "trusted reporters" are highly likely to classify their utterances as hate speech. Besides, by subscribing to the code of conduct, the Internet giants have also promised to "continue their work in identifying and promoting independent counter-narratives" -- to be proactive in fighting the viewpoints associated with the loose definition of hate speech.

The social networks started as neutral platforms where anybody could say anything and be rewarded with admiration or ridicule. As they grew into huge businesses, however, an aversion to getting in trouble with governments took over, especially as European countries have attacked them on other fronts. They could have told the governments it was their responsibility to enforce their laws, obtain proper court orders, prosecute offending users and put in requests for the removal of noxious material. They chose an easier path. Rather than let their users accept or reject the various views that their peers might express, the companies signed up to police them with the help of "civil society" activists.

That is a failure of judgment that probably won't matter to the vast majority of people who use the social networks to talk to a small circle of friends and relatives. It is, however, important for news media that use these networks as platforms for content delivery. Laws limiting free speech have a tendency to change in response to terrorist attacks, electoral upsets and shifts in public attitudes. Russians and Turks can attest to how quickly anti-terrorist legislation can turn into a system of censorship and suppression. Europe is not immune to versions of these developments. The U.S. giants' willingness to work with governments and advocacy groups to uphold speech limitations makes them unreliable as platforms.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

    To contact the author of this story:
    Leonid Bershidsky at

    To contact the editor responsible for this story:
    Philip Gray at