As my colleague Eliza Kern has reported, Facebook has apologized for the way it handled “hate speech” against women on the social network, after repeated complaints from advocacy groups alleging that it was turning a blind eye to what was clearly offensive behavior. This has been hailed by some as a victory, since Facebook has admitted that its policies concerning such content are weak. But even if its policies are improved, do we really want Facebook to be the one deciding what qualifies as hate speech and what doesn’t?
What makes this kind of topic so difficult to discuss is that much of the content Facebook was accused of harboring is unpleasant in the extreme: Some of the pages mentioned in the complaint by the group Women, Action & the Media advocated violence against women, promoted rape, and made jokes about abuse (one of the tamer examples was a page called “Kicking Your Girlfriend in the Fanny Because She Won’t Make You a Sandwich”). No one in her right mind would argue that this kind of content isn’t offensive.
The larger problem in making Facebook take this kind of content down, however, is that it forces the network to take an even more active role in determining which of the comments, or photos, or videos posted by its billion or so users deserve to be seen and which don’t. In other words, it gives Facebook even more of a license to practice what amounts to censorship—something the company routinely (and legitimately) gets criticized for doing.
To take just a few examples, Facebook has been repeatedly accused of removing content that promotes breast-feeding, presumably because it is seen as offensive by some—or perhaps because it trips the automatic filters that try to detect offensive content and send it to the team of regulators who actually police that sort of thing. The social network has also come under fire for removing pages related to the Middle East, as well as pages and content published by advocacy groups and dissidents in other parts of the world.
As Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, has pointed out, the entire concept of “hate speech” is a tricky one. In France, posting comments deemed homophobic or anti-Semitic is a crime, and Twitter is currently fighting a court order aimed at having the social network identify some of those who posted such comments. The company is resisting at least in part because it has staked its reputation on being the “free-speech wing of the free-speech party.”
Some groups have tried to convince Facebook that pages promoting heterosexuality qualify as hate speech, while others have complained that pages making fun of people who are overweight should fall into the same category. Many people would undoubtedly see the kind of content that Women, Action & the Media are complaining about as being clearly offensive in a way that these other pages aren’t—but not everyone would agree.
Where does Facebook draw the line on this particular slippery slope? Is it only the content that draws the most vocal criticism that gets removed, or only the content targeted by campaigns that put pressure on advertisers?
As more than one free-speech advocate has noted, if popular protests about offensive content were what determined the content we were able to see or share a few decades ago, anything promoting homosexuality or half a dozen other topics would have vanished from our sight. There is at least a case to be made that the simplest course of action for a network like Facebook would be to remove content only when it is required to do so by law. But then what happens to the kind of content it just apologized for?
To its credit, the social network has tried to find other ways of discouraging these kinds of pages—including asking page administrators to identify themselves (although the company’s “real name” policy raises some equally troubling questions). And while Facebook’s behavior looks and feels like censorship, it isn’t legally an infringement of free speech, because Facebook is a corporate entity, and free-speech rules apply only to governments.
And that fact about Facebook—that it is a proprietary platform controlled by private interests—is part of what makes this situation so complex.
For large numbers of people, the social network is a central method for connecting with and sharing information with their friends, a combination of water cooler and public square. But like Twitter, it is not a public square at all: It is more like a shopping mall, with private security that determines what behavior is tolerated and what isn’t.
That’s not a problem when you want security to remove the people who are offending or disturbing you, or when you agree with the company’s decisions—but it’s quite different when you are the one who is being accused of being offensive or disturbing. And Facebook has provided plenty of evidence that it can make just as many wrong choices as it can right ones.