Another terrible and familiar story about the impact of online bullying has led a chastened company to change its ways.
This time it’s Ask.fm, a website where people can ask one another questions. It has about 60 million users worldwide, many of whom opt for anonymity and about half of whom are under 18. The site has come under fire after a British teenager killed herself this month, leaving behind a trail of bullying messages directed at her by Ask.fm users. On Monday, the site announced a number of changes to prevent such behavior, according to a statement shared with the blog TechCrunch: it will make it easier to report inappropriate behavior, allow users to opt out of receiving anonymous messages, and restrict certain features to those who register with an e-mail address. Next year, it will hire more staff to moderate comments on the site.
Ask.fm has not responded to an interview request from Bloomberg Businessweek.
At this point, a pattern has formed in how Web startups respond to bullying that takes place on their platforms. A quickly rising social network sees its user base grow far faster than its staff, who may not have anti-bullying safeguards at the top of their list of priorities. Eventually, these sites find themselves under fire for failing to prevent some form of abusive behavior among members. Most of the large American technology companies have fallen in line: Facebook, YouTube, and Twitter have all hired employees to monitor and respond to inappropriate behavior in which their users target one another.
Ask.fm, which is based in Latvia and sees most of its activity abroad, has long been seen as a laggard on bullying issues, according to Stephen Balkam, chief executive officer of the Family Online Safety Institute, an advocacy group. But Silicon Valley is gradually becoming more assertive on this issue. While anti-bullying activity generally spikes after a spate of bad publicity, tech companies are learning their lessons. “It’s like a cycle,” he says. “I’d say it’s more like a spiral than reinventing the wheel every time. The newer app companies are more aware that safety, trust, and privacy will come to bite them if they don’t deal with it upfront.”
Children’s privacy, meanwhile, is drawing increasing attention from regulators and lawmakers. Last month, the Federal Trade Commission updated its rules on what kind of personal data websites can collect from children online. Ask.fm’s problems stemmed more from the dangers of online anonymity, one of the flashpoint issues for those who think about life on the Web.
At its most basic level, the debate comes down to those who say anonymity enables abusive behavior and those who think it liberates people to speak freely. The Ask.fm example, at first, seems to fit firmly into the anti-anonymity argument that people will be unbelievably nasty to other people if they can hide behind a computer screen. But the explanation may not be so straightforward. The Sunday Times, a British newspaper, cited a source saying that when Ask.fm tracked the deceased teenager’s abusers back to their IP addresses, 98 percent of the messages came from her own computer—meaning she was allegedly sending hostile notes to herself.
The claim immediately drew criticism from people who said that Ask.fm was trying to deflect blame. But there is a history of so-called digital self-harm on online services. Danah Boyd, a researcher who has studied the digital behavior of teenagers, once recalled a conversation with an employee at Formspring about what the company found by examining patterns of abuse on its site:
As they started looking into specific cases of teens answering “anonymous” harassing questions, they started realizing that a number of vicious questions were posted by the Formspring account owners themselves. They appeared on Formspring as anonymous but they were written by the owner while logged into their own account. In other words, there are teens out there who are self-harassing by “anonymously” writing mean questions to themselves and then publicly answering them. This should make you stop and swallow hard. And then sit back and realize that it’s not that surprising.
This doesn’t really let Ask.fm off the hook. If it had taken responsibility for monitoring patterns of abusive behavior, it might have noticed what was happening regardless of the source and could have stepped in, perhaps even recognizing the pattern and identifying a cry for help. Of course, this would mean that people who run social platforms have to make an effort not only to observe what is happening on their sites but also to understand what it means. This isn’t always straightforward, says Balkam: “We’re seeing phenomena that we’ve never seen before.”