A #GamerGate Target Wants Twitter to Make Harassment Harder

The Twitter account used to harass game developer Brianna Wu—@chatterwhiteman—has been removed from the site. But Wu, who fled her home over the weekend after facing multiple rape and death threats, believes Twitter needs to do more to come to terms with how it enables misogynist abuse on its platform.

“Many women are frustrated with Twitter’s policies,” she says. “It makes it very easy to create an account—and then create another account—to harass people with.” Wu says she is still being bombarded with messages blaming her for provoking the tormentors.

The threats directed at Wu and several other women in the gaming world have been part of a rolling, hashtag-fueled controversy known as #GamerGate. Depending on who you talk to, the back-and-forth is either about the ethics of video-game journalism or the discomfort of some people within the industry with the increasingly prominent role of women. It is also the latest front in a much wider challenge facing Internet companies: how to balance free speech with freedom from harassment.

The social media industry has long had problems dealing with gender-based harassment. The default on social networks has always been to let as much content stay up as possible. Facebook, Twitter, Google, and even upstart social networks such as Ello have drawn criticism for not understanding how harassment plays out on their platforms. Amanda Hess, a journalist who documented her experiences with online harassment, points out that social media sites aren’t built to deal with women’s experience, in part because they “remain dominated by men, many of whom have little personal understanding of what women face online every day.”

This is a difficult subject for Twitter in particular. The company has spent a good deal of energy—and received a good deal of praise—for standing up to law enforcement requests for information about its users. Twitter requires subpoenas or court orders to release such information. Unlike Facebook, which uses algorithms to filter the content people see, Twitter is a raw feed of information. It has a strong incentive to present itself as a completely neutral platform allowing content that will be offensive to many of its users. Anonymity has also been a cornerstone of its service and offers a clear alternative to Facebook’s approach in the two sites’ competition.

The specific speech directed at Wu isn’t going to draw many public defenders. Twitter’s terms of service specifically forbid “direct, specific threats of violence,” as well as bigoted threats against groups of people. There are examples that blur this line, but posting someone’s home address and saying you’re going there to do harm—as happened to Wu—isn’t one of the edge cases.

Wu’s critique of Twitter is that it hasn’t built tools to sufficiently protect its users against speech that everyone agrees is unacceptable. Last year Twitter added a button allowing people to report abusive tweets, but Wu says the feature is useless when confronted with waves of vitriol. Advocates have been calling for tools that allow multiple accounts to be blocked at once.

More fundamentally, Wu says the company needs to be more responsive to users’ complaints. While the @chatterwhiteman account was removed quickly, Wu still hasn’t had any interaction with Twitter. She is relying on local law enforcement to try to extract IP addresses from the abusive accounts in an attempt to tie people to their online pseudonyms.

A spokesman for Twitter declined to comment on Wu’s case, citing privacy and security considerations. The company received 1,257 requests for information from U.S. law enforcement officials in the first six months of this year, and it provided some information in about three-quarters of those cases, according to its latest transparency report.